00:00:00.000 Started by upstream project "autotest-per-patch" build number 132348
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.070 The recommended git tool is: git
00:00:00.070 using credential 00000000-0000-0000-0000-000000000002
00:00:00.071 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.112 Fetching changes from the remote Git repository
00:00:00.114 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.175 Using shallow fetch with depth 1
00:00:00.175 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.175 > git --version # timeout=10
00:00:00.234 > git --version # 'git version 2.39.2'
00:00:00.234 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.272 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.272 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:12.351 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:12.365 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:12.378 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:12.378 > git config core.sparsecheckout # timeout=10
00:00:12.391 > git read-tree -mu HEAD # timeout=10
00:00:12.408 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:12.432 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:12.432 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:12.548 [Pipeline] Start of Pipeline
00:00:12.564 [Pipeline] library
00:00:12.567 Loading library shm_lib@master
00:00:12.567 Library shm_lib@master is cached. Copying from home.
00:00:12.585 [Pipeline] node
00:00:12.594 Running on CYP13 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:12.596 [Pipeline] {
00:00:12.607 [Pipeline] catchError
00:00:12.609 [Pipeline] {
00:00:12.623 [Pipeline] wrap
00:00:12.633 [Pipeline] {
00:00:12.642 [Pipeline] stage
00:00:12.644 [Pipeline] { (Prologue)
00:00:12.922 [Pipeline] sh
00:00:13.215 + logger -p user.info -t JENKINS-CI
00:00:13.235 [Pipeline] echo
00:00:13.237 Node: CYP13
00:00:13.245 [Pipeline] sh
00:00:13.554 [Pipeline] setCustomBuildProperty
00:00:13.567 [Pipeline] echo
00:00:13.569 Cleanup processes
00:00:13.575 [Pipeline] sh
00:00:13.866 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:13.866 3067688 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:13.881 [Pipeline] sh
00:00:14.172 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:14.172 ++ grep -v 'sudo pgrep'
00:00:14.172 ++ awk '{print $1}'
00:00:14.172 + sudo kill -9
00:00:14.172 + true
00:00:14.189 [Pipeline] cleanWs
00:00:14.200 [WS-CLEANUP] Deleting project workspace...
00:00:14.200 [WS-CLEANUP] Deferred wipeout is used...
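The pgrep pipeline in the cleanup step above is how the job reaps stray SPDK processes from a previous run before wiping the workspace. A minimal standalone sketch of the same idiom; WS and PIDS are illustrative names, the workspace path is the one from this job:

# Collect PIDs of leftover processes whose command line references the
# workspace, excluding the pgrep invocation itself.
WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
PIDS=$(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# $PIDS is left unquoted so multiple PIDs word-split into separate arguments.
# kill fails on an empty list, so '|| true' keeps the step green, which is
# exactly the '+ true' seen in the trace above.
[ -n "$PIDS" ] && sudo kill -9 $PIDS || true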
00:00:14.207 [WS-CLEANUP] done 00:00:14.213 [Pipeline] setCustomBuildProperty 00:00:14.229 [Pipeline] sh 00:00:14.516 + sudo git config --global --replace-all safe.directory '*' 00:00:14.609 [Pipeline] httpRequest 00:00:14.995 [Pipeline] echo 00:00:14.998 Sorcerer 10.211.164.20 is alive 00:00:15.009 [Pipeline] retry 00:00:15.012 [Pipeline] { 00:00:15.031 [Pipeline] httpRequest 00:00:15.037 HttpMethod: GET 00:00:15.037 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.038 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.057 Response Code: HTTP/1.1 200 OK 00:00:15.057 Success: Status code 200 is in the accepted range: 200,404 00:00:15.058 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:18.640 [Pipeline] } 00:00:18.659 [Pipeline] // retry 00:00:18.666 [Pipeline] sh 00:00:18.956 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:18.975 [Pipeline] httpRequest 00:00:19.577 [Pipeline] echo 00:00:19.579 Sorcerer 10.211.164.20 is alive 00:00:19.589 [Pipeline] retry 00:00:19.591 [Pipeline] { 00:00:19.606 [Pipeline] httpRequest 00:00:19.611 HttpMethod: GET 00:00:19.611 URL: http://10.211.164.20/packages/spdk_12962b97e445e4180d15560c935c7e9c74cbcced.tar.gz 00:00:19.612 Sending request to url: http://10.211.164.20/packages/spdk_12962b97e445e4180d15560c935c7e9c74cbcced.tar.gz 00:00:19.633 Response Code: HTTP/1.1 200 OK 00:00:19.634 Success: Status code 200 is in the accepted range: 200,404 00:00:19.634 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_12962b97e445e4180d15560c935c7e9c74cbcced.tar.gz 00:00:54.615 [Pipeline] } 00:00:54.635 [Pipeline] // retry 00:00:54.644 [Pipeline] sh 00:00:54.941 + tar --no-same-owner -xf spdk_12962b97e445e4180d15560c935c7e9c74cbcced.tar.gz 00:00:58.272 [Pipeline] sh 00:00:58.559 + git -C spdk log --oneline -n5 00:00:58.559 12962b97e ut/bdev: Remove duplication with many stups among unit test files 00:00:58.559 8ccf9ce7b accel: Fix a bug that append_dif_generate_copy() did not set dif_ctx 00:00:58.559 ac2633210 accel: Fix comments for spdk_accel_*_dif_verify_copy() 00:00:58.559 3e396d94d bdev: Clean up duplicated asserts in bdev_io_pull_data() 00:00:58.559 ecdb65a23 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf() 00:00:58.571 [Pipeline] } 00:00:58.586 [Pipeline] // stage 00:00:58.594 [Pipeline] stage 00:00:58.596 [Pipeline] { (Prepare) 00:00:58.611 [Pipeline] writeFile 00:00:58.629 [Pipeline] sh 00:00:58.916 + logger -p user.info -t JENKINS-CI 00:00:58.929 [Pipeline] sh 00:00:59.215 + logger -p user.info -t JENKINS-CI 00:00:59.227 [Pipeline] sh 00:00:59.514 + cat autorun-spdk.conf 00:00:59.514 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.514 SPDK_TEST_NVMF=1 00:00:59.514 SPDK_TEST_NVME_CLI=1 00:00:59.514 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.514 SPDK_TEST_NVMF_NICS=e810 00:00:59.514 SPDK_TEST_VFIOUSER=1 00:00:59.514 SPDK_RUN_UBSAN=1 00:00:59.514 NET_TYPE=phy 00:00:59.522 RUN_NIGHTLY=0 00:00:59.527 [Pipeline] readFile 00:00:59.550 [Pipeline] withEnv 00:00:59.552 [Pipeline] { 00:00:59.564 [Pipeline] sh 00:00:59.853 + set -ex 00:00:59.853 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:59.853 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:59.853 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.853 ++ SPDK_TEST_NVMF=1 00:00:59.853 ++ SPDK_TEST_NVME_CLI=1 
00:00:59.853 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.853 ++ SPDK_TEST_NVMF_NICS=e810 00:00:59.853 ++ SPDK_TEST_VFIOUSER=1 00:00:59.853 ++ SPDK_RUN_UBSAN=1 00:00:59.853 ++ NET_TYPE=phy 00:00:59.853 ++ RUN_NIGHTLY=0 00:00:59.853 + case $SPDK_TEST_NVMF_NICS in 00:00:59.853 + DRIVERS=ice 00:00:59.853 + [[ tcp == \r\d\m\a ]] 00:00:59.853 + [[ -n ice ]] 00:00:59.853 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:59.853 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:59.853 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:59.853 rmmod: ERROR: Module irdma is not currently loaded 00:00:59.853 rmmod: ERROR: Module i40iw is not currently loaded 00:00:59.853 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:59.853 + true 00:00:59.853 + for D in $DRIVERS 00:00:59.853 + sudo modprobe ice 00:00:59.853 + exit 0 00:00:59.864 [Pipeline] } 00:00:59.881 [Pipeline] // withEnv 00:00:59.887 [Pipeline] } 00:00:59.902 [Pipeline] // stage 00:00:59.914 [Pipeline] catchError 00:00:59.916 [Pipeline] { 00:00:59.935 [Pipeline] timeout 00:00:59.935 Timeout set to expire in 1 hr 0 min 00:00:59.938 [Pipeline] { 00:00:59.952 [Pipeline] stage 00:00:59.954 [Pipeline] { (Tests) 00:00:59.967 [Pipeline] sh 00:01:00.257 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.258 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.258 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.258 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:00.258 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:00.258 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:00.258 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:00.258 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:00.258 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:00.258 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:00.258 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:00.258 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.258 + source /etc/os-release 00:01:00.258 ++ NAME='Fedora Linux' 00:01:00.258 ++ VERSION='39 (Cloud Edition)' 00:01:00.258 ++ ID=fedora 00:01:00.258 ++ VERSION_ID=39 00:01:00.258 ++ VERSION_CODENAME= 00:01:00.258 ++ PLATFORM_ID=platform:f39 00:01:00.258 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:00.258 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:00.258 ++ LOGO=fedora-logo-icon 00:01:00.258 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:00.258 ++ HOME_URL=https://fedoraproject.org/ 00:01:00.258 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:00.258 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:00.258 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:00.258 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:00.258 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:00.258 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:00.258 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:00.258 ++ SUPPORT_END=2024-11-12 00:01:00.258 ++ VARIANT='Cloud Edition' 00:01:00.258 ++ VARIANT_ID=cloud 00:01:00.258 + uname -a 00:01:00.258 Linux spdk-cyp-13 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:00.258 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:03.564 Hugepages 00:01:03.564 node hugesize free / total 00:01:03.564 node0 1048576kB 0 / 0 00:01:03.564 node0 2048kB 0 / 0 00:01:03.564 node1 1048576kB 0 / 0 00:01:03.564 node1 2048kB 0 / 0 00:01:03.564 00:01:03.564 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:03.564 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:03.564 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:03.564 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:03.564 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:03.564 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:03.564 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:03.564 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:03.564 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:03.564 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:03.564 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:03.564 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:03.564 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:03.564 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:03.564 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:03.564 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:03.564 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:03.564 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:03.564 + rm -f /tmp/spdk-ld-path 00:01:03.564 + source autorun-spdk.conf 00:01:03.564 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.564 ++ SPDK_TEST_NVMF=1 00:01:03.564 ++ SPDK_TEST_NVME_CLI=1 00:01:03.564 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:03.564 ++ SPDK_TEST_NVMF_NICS=e810 00:01:03.564 ++ SPDK_TEST_VFIOUSER=1 00:01:03.564 ++ SPDK_RUN_UBSAN=1 00:01:03.564 ++ NET_TYPE=phy 00:01:03.564 ++ RUN_NIGHTLY=0 00:01:03.564 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:03.564 + [[ -n '' ]] 00:01:03.564 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:03.564 + for M in /var/spdk/build-*-manifest.txt 00:01:03.564 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:01:03.564 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:03.564 + for M in /var/spdk/build-*-manifest.txt 00:01:03.564 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:03.564 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:03.564 + for M in /var/spdk/build-*-manifest.txt 00:01:03.564 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:03.564 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:03.564 ++ uname 00:01:03.564 + [[ Linux == \L\i\n\u\x ]] 00:01:03.564 + sudo dmesg -T 00:01:03.564 + sudo dmesg --clear 00:01:03.564 + dmesg_pid=3069259 00:01:03.564 + [[ Fedora Linux == FreeBSD ]] 00:01:03.564 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:03.564 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:03.564 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:03.564 + [[ -x /usr/src/fio-static/fio ]] 00:01:03.564 + export FIO_BIN=/usr/src/fio-static/fio 00:01:03.564 + FIO_BIN=/usr/src/fio-static/fio 00:01:03.564 + sudo dmesg -Tw 00:01:03.564 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:03.564 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:03.564 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:03.564 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:03.564 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:03.564 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:03.564 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:03.564 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:03.564 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:03.825 07:15:21 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:03.825 07:15:21 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:03.825 07:15:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.825 07:15:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:03.825 07:15:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:03.825 07:15:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:03.825 07:15:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:03.825 07:15:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:03.825 07:15:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:03.825 07:15:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:03.825 07:15:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:03.825 07:15:21 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:03.825 07:15:21 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:03.825 07:15:21 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:03.825 07:15:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:03.825 07:15:21 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:03.825 07:15:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:03.825 07:15:21 -- scripts/common.sh@552 -- $ 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:03.825 07:15:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:03.825 07:15:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.825 07:15:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.826 07:15:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.826 07:15:21 -- paths/export.sh@5 -- $ export PATH 00:01:03.826 07:15:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.826 07:15:21 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:03.826 07:15:21 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:03.826 07:15:21 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732083321.XXXXXX 00:01:03.826 07:15:21 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732083321.6HBiGZ 00:01:03.826 07:15:21 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:03.826 07:15:21 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:03.826 07:15:21 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:03.826 07:15:21 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:03.826 07:15:21 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:03.826 07:15:21 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:03.826 07:15:21 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:03.826 07:15:21 -- common/autotest_common.sh@10 -- $ set +x 
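The mktemp call traced above gives each run a private scratch directory keyed by the epoch timestamp, so concurrent runs on one host cannot collide. A small sketch of the pattern; 'ts' is an illustrative name, SPDK_WORKSPACE is the variable from the trace:

# date +%s tags the directory with the run's start time; mktemp appends a
# random suffix, e.g. /tmp/spdk_1732083321.6HBiGZ as seen above.
ts=$(date +%s)
SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")
echo "scratch dir: $SPDK_WORKSPACE"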
00:01:03.826 07:15:21 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:03.826 07:15:21 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:03.826 07:15:21 -- pm/common@17 -- $ local monitor 00:01:03.826 07:15:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.826 07:15:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.826 07:15:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.826 07:15:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.826 07:15:21 -- pm/common@21 -- $ date +%s 00:01:03.826 07:15:21 -- pm/common@25 -- $ sleep 1 00:01:03.826 07:15:21 -- pm/common@21 -- $ date +%s 00:01:03.826 07:15:21 -- pm/common@21 -- $ date +%s 00:01:03.826 07:15:21 -- pm/common@21 -- $ date +%s 00:01:03.826 07:15:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732083321 00:01:03.826 07:15:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732083321 00:01:03.826 07:15:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732083321 00:01:03.826 07:15:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732083321 00:01:03.826 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732083321_collect-cpu-load.pm.log 00:01:03.826 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732083321_collect-vmstat.pm.log 00:01:03.826 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732083321_collect-cpu-temp.pm.log 00:01:03.826 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732083321_collect-bmc-pm.bmc.pm.log 00:01:04.768 07:15:22 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:04.768 07:15:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:04.768 07:15:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:04.768 07:15:22 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:04.768 07:15:22 -- spdk/autobuild.sh@16 -- $ date -u 00:01:04.768 Wed Nov 20 06:15:22 AM UTC 2024 00:01:04.768 07:15:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:04.768 v25.01-pre-199-g12962b97e 00:01:04.768 07:15:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:04.768 07:15:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:04.768 07:15:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:04.768 07:15:22 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:04.768 07:15:22 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:04.768 07:15:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:05.029 
************************************ 00:01:05.029 START TEST ubsan 00:01:05.029 ************************************ 00:01:05.029 07:15:22 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:05.029 using ubsan 00:01:05.029 00:01:05.029 real 0m0.001s 00:01:05.029 user 0m0.001s 00:01:05.029 sys 0m0.000s 00:01:05.029 07:15:22 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:05.029 07:15:22 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:05.029 ************************************ 00:01:05.029 END TEST ubsan 00:01:05.029 ************************************ 00:01:05.029 07:15:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:05.029 07:15:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:05.029 07:15:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:05.029 07:15:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:05.029 07:15:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:05.029 07:15:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:05.029 07:15:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:05.029 07:15:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:05.029 07:15:23 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:05.029 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:05.029 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:05.600 Using 'verbs' RDMA provider 00:01:21.461 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:33.701 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:34.273 Creating mk/config.mk...done. 00:01:34.273 Creating mk/cc.flags.mk...done. 00:01:34.273 Type 'make' to build. 00:01:34.273 07:15:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:34.273 07:15:52 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:34.273 07:15:52 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:34.273 07:15:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.273 ************************************ 00:01:34.273 START TEST make 00:01:34.273 ************************************ 00:01:34.273 07:15:52 make -- common/autotest_common.sh@1127 -- $ make -j144 00:01:34.846 make[1]: Nothing to be done for 'all'. 
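The configure invocation above is assembled from the job's config_params: a debug build with -Werror, UBSAN and coverage instrumentation, and ublk/vfio-user/shared-library support. Reproducing the build by hand from the same checkout would look roughly like this; the -j value simply matches the local machine instead of the node's hard-coded 144:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j"$(nproc)"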
00:01:36.238 The Meson build system 00:01:36.238 Version: 1.5.0 00:01:36.238 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:36.238 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:36.238 Build type: native build 00:01:36.238 Project name: libvfio-user 00:01:36.238 Project version: 0.0.1 00:01:36.238 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:36.238 C linker for the host machine: cc ld.bfd 2.40-14 00:01:36.238 Host machine cpu family: x86_64 00:01:36.238 Host machine cpu: x86_64 00:01:36.238 Run-time dependency threads found: YES 00:01:36.238 Library dl found: YES 00:01:36.238 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:36.238 Run-time dependency json-c found: YES 0.17 00:01:36.238 Run-time dependency cmocka found: YES 1.1.7 00:01:36.238 Program pytest-3 found: NO 00:01:36.238 Program flake8 found: NO 00:01:36.238 Program misspell-fixer found: NO 00:01:36.238 Program restructuredtext-lint found: NO 00:01:36.238 Program valgrind found: YES (/usr/bin/valgrind) 00:01:36.238 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:36.238 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:36.238 Compiler for C supports arguments -Wwrite-strings: YES 00:01:36.238 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:36.238 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:36.238 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:36.238 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:36.238 Build targets in project: 8 00:01:36.238 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:36.238 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:36.238 00:01:36.238 libvfio-user 0.0.1 00:01:36.238 00:01:36.238 User defined options 00:01:36.238 buildtype : debug 00:01:36.238 default_library: shared 00:01:36.238 libdir : /usr/local/lib 00:01:36.238 00:01:36.238 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:36.498 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:36.758 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:36.758 [2/37] Compiling C object samples/null.p/null.c.o 00:01:36.758 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:36.758 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:36.758 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:36.758 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:36.758 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:36.758 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:36.758 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:36.758 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:36.758 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:36.758 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:36.758 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:36.758 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:36.758 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:36.758 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:36.758 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:36.758 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:36.758 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:36.758 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:36.758 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:36.758 [22/37] Compiling C object samples/server.p/server.c.o 00:01:36.758 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:36.758 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:36.758 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:36.758 [26/37] Compiling C object samples/client.p/client.c.o 00:01:36.758 [27/37] Linking target samples/client 00:01:37.018 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:37.018 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:37.018 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:37.018 [31/37] Linking target test/unit_tests 00:01:37.018 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:37.018 [33/37] Linking target samples/server 00:01:37.018 [34/37] Linking target samples/null 00:01:37.018 [35/37] Linking target samples/gpio-pci-idio-16 00:01:37.018 [36/37] Linking target samples/lspci 00:01:37.018 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:37.018 INFO: autodetecting backend as ninja 00:01:37.018 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
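The surrounding lines show SPDK's make descending into the bundled libvfio-user: meson configures a debug/shared build under build/libvfio-user/build-debug, ninja compiles the 37 targets, and the DESTDIR install on the next line stages the result inside the SPDK tree rather than under /usr/local. A rough manual equivalent, with SRC and BLD as illustrative names:

SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
BLD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
# Options match the "User defined options" summary above.
meson setup "$BLD" "$SRC" -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
ninja -C "$BLD"
# DESTDIR redirects the install into the SPDK build tree.
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
    meson install --quiet -C "$BLD"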
00:01:37.278 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:37.538 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:37.538 ninja: no work to do. 00:01:44.133 The Meson build system 00:01:44.133 Version: 1.5.0 00:01:44.133 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:44.133 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:44.133 Build type: native build 00:01:44.133 Program cat found: YES (/usr/bin/cat) 00:01:44.133 Project name: DPDK 00:01:44.133 Project version: 24.03.0 00:01:44.133 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:44.133 C linker for the host machine: cc ld.bfd 2.40-14 00:01:44.133 Host machine cpu family: x86_64 00:01:44.133 Host machine cpu: x86_64 00:01:44.133 Message: ## Building in Developer Mode ## 00:01:44.133 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:44.133 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:44.133 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:44.133 Program python3 found: YES (/usr/bin/python3) 00:01:44.133 Program cat found: YES (/usr/bin/cat) 00:01:44.133 Compiler for C supports arguments -march=native: YES 00:01:44.133 Checking for size of "void *" : 8 00:01:44.133 Checking for size of "void *" : 8 (cached) 00:01:44.133 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:44.133 Library m found: YES 00:01:44.133 Library numa found: YES 00:01:44.133 Has header "numaif.h" : YES 00:01:44.133 Library fdt found: NO 00:01:44.133 Library execinfo found: NO 00:01:44.133 Has header "execinfo.h" : YES 00:01:44.133 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:44.133 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:44.133 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:44.133 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:44.133 Run-time dependency openssl found: YES 3.1.1 00:01:44.133 Run-time dependency libpcap found: YES 1.10.4 00:01:44.133 Has header "pcap.h" with dependency libpcap: YES 00:01:44.133 Compiler for C supports arguments -Wcast-qual: YES 00:01:44.133 Compiler for C supports arguments -Wdeprecated: YES 00:01:44.133 Compiler for C supports arguments -Wformat: YES 00:01:44.133 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:44.133 Compiler for C supports arguments -Wformat-security: NO 00:01:44.133 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.133 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:44.133 Compiler for C supports arguments -Wnested-externs: YES 00:01:44.133 Compiler for C supports arguments -Wold-style-definition: YES 00:01:44.133 Compiler for C supports arguments -Wpointer-arith: YES 00:01:44.133 Compiler for C supports arguments -Wsign-compare: YES 00:01:44.133 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:44.133 Compiler for C supports arguments -Wundef: YES 00:01:44.133 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.133 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:44.133 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:44.133 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.133 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:44.133 Program objdump found: YES (/usr/bin/objdump) 00:01:44.133 Compiler for C supports arguments -mavx512f: YES 00:01:44.133 Checking if "AVX512 checking" compiles: YES 00:01:44.133 Fetching value of define "__SSE4_2__" : 1 00:01:44.133 Fetching value of define "__AES__" : 1 00:01:44.133 Fetching value of define "__AVX__" : 1 00:01:44.133 Fetching value of define "__AVX2__" : 1 00:01:44.133 Fetching value of define "__AVX512BW__" : 1 00:01:44.133 Fetching value of define "__AVX512CD__" : 1 00:01:44.133 Fetching value of define "__AVX512DQ__" : 1 00:01:44.133 Fetching value of define "__AVX512F__" : 1 00:01:44.133 Fetching value of define "__AVX512VL__" : 1 00:01:44.133 Fetching value of define "__PCLMUL__" : 1 00:01:44.133 Fetching value of define "__RDRND__" : 1 00:01:44.133 Fetching value of define "__RDSEED__" : 1 00:01:44.133 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:44.133 Fetching value of define "__znver1__" : (undefined) 00:01:44.133 Fetching value of define "__znver2__" : (undefined) 00:01:44.133 Fetching value of define "__znver3__" : (undefined) 00:01:44.133 Fetching value of define "__znver4__" : (undefined) 00:01:44.133 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:44.133 Message: lib/log: Defining dependency "log" 00:01:44.133 Message: lib/kvargs: Defining dependency "kvargs" 00:01:44.133 Message: lib/telemetry: Defining dependency "telemetry" 00:01:44.133 Checking for function "getentropy" : NO 00:01:44.133 Message: lib/eal: Defining dependency "eal" 00:01:44.133 Message: lib/ring: Defining dependency "ring" 00:01:44.133 Message: lib/rcu: Defining dependency "rcu" 00:01:44.133 Message: lib/mempool: Defining dependency "mempool" 00:01:44.133 Message: lib/mbuf: Defining dependency "mbuf" 00:01:44.133 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:44.133 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.133 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:44.133 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:44.133 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:44.133 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:44.133 Compiler for C supports arguments -mpclmul: YES 00:01:44.133 Compiler for C supports arguments -maes: YES 00:01:44.133 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:44.133 Compiler for C supports arguments -mavx512bw: YES 00:01:44.133 Compiler for C supports arguments -mavx512dq: YES 00:01:44.133 Compiler for C supports arguments -mavx512vl: YES 00:01:44.133 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:44.133 Compiler for C supports arguments -mavx2: YES 00:01:44.133 Compiler for C supports arguments -mavx: YES 00:01:44.133 Message: lib/net: Defining dependency "net" 00:01:44.133 Message: lib/meter: Defining dependency "meter" 00:01:44.133 Message: lib/ethdev: Defining dependency "ethdev" 00:01:44.133 Message: lib/pci: Defining dependency "pci" 00:01:44.133 Message: lib/cmdline: Defining dependency "cmdline" 00:01:44.133 Message: lib/hash: Defining dependency "hash" 00:01:44.133 Message: lib/timer: Defining dependency "timer" 00:01:44.133 Message: lib/compressdev: Defining dependency "compressdev" 00:01:44.133 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:44.133 Message: lib/dmadev: Defining dependency "dmadev" 
00:01:44.133 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:44.133 Message: lib/power: Defining dependency "power" 00:01:44.133 Message: lib/reorder: Defining dependency "reorder" 00:01:44.133 Message: lib/security: Defining dependency "security" 00:01:44.133 Has header "linux/userfaultfd.h" : YES 00:01:44.133 Has header "linux/vduse.h" : YES 00:01:44.133 Message: lib/vhost: Defining dependency "vhost" 00:01:44.133 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:44.133 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:44.133 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:44.133 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:44.133 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:44.133 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:44.133 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:44.133 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:44.133 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:44.133 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:44.133 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:44.133 Configuring doxy-api-html.conf using configuration 00:01:44.133 Configuring doxy-api-man.conf using configuration 00:01:44.133 Program mandb found: YES (/usr/bin/mandb) 00:01:44.133 Program sphinx-build found: NO 00:01:44.133 Configuring rte_build_config.h using configuration 00:01:44.133 Message: 00:01:44.133 ================= 00:01:44.133 Applications Enabled 00:01:44.133 ================= 00:01:44.133 00:01:44.133 apps: 00:01:44.133 00:01:44.133 00:01:44.133 Message: 00:01:44.133 ================= 00:01:44.133 Libraries Enabled 00:01:44.133 ================= 00:01:44.133 00:01:44.133 libs: 00:01:44.133 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:44.133 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:44.133 cryptodev, dmadev, power, reorder, security, vhost, 00:01:44.133 00:01:44.133 Message: 00:01:44.133 =============== 00:01:44.133 Drivers Enabled 00:01:44.133 =============== 00:01:44.133 00:01:44.133 common: 00:01:44.133 00:01:44.133 bus: 00:01:44.133 pci, vdev, 00:01:44.133 mempool: 00:01:44.133 ring, 00:01:44.133 dma: 00:01:44.133 00:01:44.133 net: 00:01:44.133 00:01:44.133 crypto: 00:01:44.133 00:01:44.133 compress: 00:01:44.133 00:01:44.133 vdpa: 00:01:44.133 00:01:44.133 00:01:44.133 Message: 00:01:44.134 ================= 00:01:44.134 Content Skipped 00:01:44.134 ================= 00:01:44.134 00:01:44.134 apps: 00:01:44.134 dumpcap: explicitly disabled via build config 00:01:44.134 graph: explicitly disabled via build config 00:01:44.134 pdump: explicitly disabled via build config 00:01:44.134 proc-info: explicitly disabled via build config 00:01:44.134 test-acl: explicitly disabled via build config 00:01:44.134 test-bbdev: explicitly disabled via build config 00:01:44.134 test-cmdline: explicitly disabled via build config 00:01:44.134 test-compress-perf: explicitly disabled via build config 00:01:44.134 test-crypto-perf: explicitly disabled via build config 00:01:44.134 test-dma-perf: explicitly disabled via build config 00:01:44.134 test-eventdev: explicitly disabled via build config 00:01:44.134 test-fib: explicitly disabled via build config 00:01:44.134 test-flow-perf: explicitly disabled via build config 00:01:44.134 test-gpudev: explicitly disabled 
via build config 00:01:44.134 test-mldev: explicitly disabled via build config 00:01:44.134 test-pipeline: explicitly disabled via build config 00:01:44.134 test-pmd: explicitly disabled via build config 00:01:44.134 test-regex: explicitly disabled via build config 00:01:44.134 test-sad: explicitly disabled via build config 00:01:44.134 test-security-perf: explicitly disabled via build config 00:01:44.134 00:01:44.134 libs: 00:01:44.134 argparse: explicitly disabled via build config 00:01:44.134 metrics: explicitly disabled via build config 00:01:44.134 acl: explicitly disabled via build config 00:01:44.134 bbdev: explicitly disabled via build config 00:01:44.134 bitratestats: explicitly disabled via build config 00:01:44.134 bpf: explicitly disabled via build config 00:01:44.134 cfgfile: explicitly disabled via build config 00:01:44.134 distributor: explicitly disabled via build config 00:01:44.134 efd: explicitly disabled via build config 00:01:44.134 eventdev: explicitly disabled via build config 00:01:44.134 dispatcher: explicitly disabled via build config 00:01:44.134 gpudev: explicitly disabled via build config 00:01:44.134 gro: explicitly disabled via build config 00:01:44.134 gso: explicitly disabled via build config 00:01:44.134 ip_frag: explicitly disabled via build config 00:01:44.134 jobstats: explicitly disabled via build config 00:01:44.134 latencystats: explicitly disabled via build config 00:01:44.134 lpm: explicitly disabled via build config 00:01:44.134 member: explicitly disabled via build config 00:01:44.134 pcapng: explicitly disabled via build config 00:01:44.134 rawdev: explicitly disabled via build config 00:01:44.134 regexdev: explicitly disabled via build config 00:01:44.134 mldev: explicitly disabled via build config 00:01:44.134 rib: explicitly disabled via build config 00:01:44.134 sched: explicitly disabled via build config 00:01:44.134 stack: explicitly disabled via build config 00:01:44.134 ipsec: explicitly disabled via build config 00:01:44.134 pdcp: explicitly disabled via build config 00:01:44.134 fib: explicitly disabled via build config 00:01:44.134 port: explicitly disabled via build config 00:01:44.134 pdump: explicitly disabled via build config 00:01:44.134 table: explicitly disabled via build config 00:01:44.134 pipeline: explicitly disabled via build config 00:01:44.134 graph: explicitly disabled via build config 00:01:44.134 node: explicitly disabled via build config 00:01:44.134 00:01:44.134 drivers: 00:01:44.134 common/cpt: not in enabled drivers build config 00:01:44.134 common/dpaax: not in enabled drivers build config 00:01:44.134 common/iavf: not in enabled drivers build config 00:01:44.134 common/idpf: not in enabled drivers build config 00:01:44.134 common/ionic: not in enabled drivers build config 00:01:44.134 common/mvep: not in enabled drivers build config 00:01:44.134 common/octeontx: not in enabled drivers build config 00:01:44.134 bus/auxiliary: not in enabled drivers build config 00:01:44.134 bus/cdx: not in enabled drivers build config 00:01:44.134 bus/dpaa: not in enabled drivers build config 00:01:44.134 bus/fslmc: not in enabled drivers build config 00:01:44.134 bus/ifpga: not in enabled drivers build config 00:01:44.134 bus/platform: not in enabled drivers build config 00:01:44.134 bus/uacce: not in enabled drivers build config 00:01:44.134 bus/vmbus: not in enabled drivers build config 00:01:44.134 common/cnxk: not in enabled drivers build config 00:01:44.134 common/mlx5: not in enabled drivers build config 00:01:44.134 
common/nfp: not in enabled drivers build config 00:01:44.134 common/nitrox: not in enabled drivers build config 00:01:44.134 common/qat: not in enabled drivers build config 00:01:44.134 common/sfc_efx: not in enabled drivers build config 00:01:44.134 mempool/bucket: not in enabled drivers build config 00:01:44.134 mempool/cnxk: not in enabled drivers build config 00:01:44.134 mempool/dpaa: not in enabled drivers build config 00:01:44.134 mempool/dpaa2: not in enabled drivers build config 00:01:44.134 mempool/octeontx: not in enabled drivers build config 00:01:44.134 mempool/stack: not in enabled drivers build config 00:01:44.134 dma/cnxk: not in enabled drivers build config 00:01:44.134 dma/dpaa: not in enabled drivers build config 00:01:44.134 dma/dpaa2: not in enabled drivers build config 00:01:44.134 dma/hisilicon: not in enabled drivers build config 00:01:44.134 dma/idxd: not in enabled drivers build config 00:01:44.134 dma/ioat: not in enabled drivers build config 00:01:44.134 dma/skeleton: not in enabled drivers build config 00:01:44.134 net/af_packet: not in enabled drivers build config 00:01:44.134 net/af_xdp: not in enabled drivers build config 00:01:44.134 net/ark: not in enabled drivers build config 00:01:44.134 net/atlantic: not in enabled drivers build config 00:01:44.134 net/avp: not in enabled drivers build config 00:01:44.134 net/axgbe: not in enabled drivers build config 00:01:44.134 net/bnx2x: not in enabled drivers build config 00:01:44.134 net/bnxt: not in enabled drivers build config 00:01:44.134 net/bonding: not in enabled drivers build config 00:01:44.134 net/cnxk: not in enabled drivers build config 00:01:44.134 net/cpfl: not in enabled drivers build config 00:01:44.134 net/cxgbe: not in enabled drivers build config 00:01:44.134 net/dpaa: not in enabled drivers build config 00:01:44.134 net/dpaa2: not in enabled drivers build config 00:01:44.134 net/e1000: not in enabled drivers build config 00:01:44.134 net/ena: not in enabled drivers build config 00:01:44.134 net/enetc: not in enabled drivers build config 00:01:44.134 net/enetfec: not in enabled drivers build config 00:01:44.134 net/enic: not in enabled drivers build config 00:01:44.134 net/failsafe: not in enabled drivers build config 00:01:44.134 net/fm10k: not in enabled drivers build config 00:01:44.134 net/gve: not in enabled drivers build config 00:01:44.134 net/hinic: not in enabled drivers build config 00:01:44.134 net/hns3: not in enabled drivers build config 00:01:44.134 net/i40e: not in enabled drivers build config 00:01:44.134 net/iavf: not in enabled drivers build config 00:01:44.134 net/ice: not in enabled drivers build config 00:01:44.134 net/idpf: not in enabled drivers build config 00:01:44.134 net/igc: not in enabled drivers build config 00:01:44.134 net/ionic: not in enabled drivers build config 00:01:44.134 net/ipn3ke: not in enabled drivers build config 00:01:44.134 net/ixgbe: not in enabled drivers build config 00:01:44.134 net/mana: not in enabled drivers build config 00:01:44.134 net/memif: not in enabled drivers build config 00:01:44.134 net/mlx4: not in enabled drivers build config 00:01:44.134 net/mlx5: not in enabled drivers build config 00:01:44.134 net/mvneta: not in enabled drivers build config 00:01:44.134 net/mvpp2: not in enabled drivers build config 00:01:44.134 net/netvsc: not in enabled drivers build config 00:01:44.134 net/nfb: not in enabled drivers build config 00:01:44.134 net/nfp: not in enabled drivers build config 00:01:44.134 net/ngbe: not in enabled drivers build 
config 00:01:44.134 net/null: not in enabled drivers build config 00:01:44.134 net/octeontx: not in enabled drivers build config 00:01:44.134 net/octeon_ep: not in enabled drivers build config 00:01:44.134 net/pcap: not in enabled drivers build config 00:01:44.134 net/pfe: not in enabled drivers build config 00:01:44.134 net/qede: not in enabled drivers build config 00:01:44.134 net/ring: not in enabled drivers build config 00:01:44.134 net/sfc: not in enabled drivers build config 00:01:44.134 net/softnic: not in enabled drivers build config 00:01:44.134 net/tap: not in enabled drivers build config 00:01:44.134 net/thunderx: not in enabled drivers build config 00:01:44.134 net/txgbe: not in enabled drivers build config 00:01:44.134 net/vdev_netvsc: not in enabled drivers build config 00:01:44.134 net/vhost: not in enabled drivers build config 00:01:44.134 net/virtio: not in enabled drivers build config 00:01:44.134 net/vmxnet3: not in enabled drivers build config 00:01:44.134 raw/*: missing internal dependency, "rawdev" 00:01:44.134 crypto/armv8: not in enabled drivers build config 00:01:44.134 crypto/bcmfs: not in enabled drivers build config 00:01:44.134 crypto/caam_jr: not in enabled drivers build config 00:01:44.134 crypto/ccp: not in enabled drivers build config 00:01:44.134 crypto/cnxk: not in enabled drivers build config 00:01:44.134 crypto/dpaa_sec: not in enabled drivers build config 00:01:44.134 crypto/dpaa2_sec: not in enabled drivers build config 00:01:44.134 crypto/ipsec_mb: not in enabled drivers build config 00:01:44.134 crypto/mlx5: not in enabled drivers build config 00:01:44.134 crypto/mvsam: not in enabled drivers build config 00:01:44.134 crypto/nitrox: not in enabled drivers build config 00:01:44.134 crypto/null: not in enabled drivers build config 00:01:44.134 crypto/octeontx: not in enabled drivers build config 00:01:44.134 crypto/openssl: not in enabled drivers build config 00:01:44.134 crypto/scheduler: not in enabled drivers build config 00:01:44.134 crypto/uadk: not in enabled drivers build config 00:01:44.134 crypto/virtio: not in enabled drivers build config 00:01:44.134 compress/isal: not in enabled drivers build config 00:01:44.134 compress/mlx5: not in enabled drivers build config 00:01:44.134 compress/nitrox: not in enabled drivers build config 00:01:44.134 compress/octeontx: not in enabled drivers build config 00:01:44.134 compress/zlib: not in enabled drivers build config 00:01:44.134 regex/*: missing internal dependency, "regexdev" 00:01:44.134 ml/*: missing internal dependency, "mldev" 00:01:44.134 vdpa/ifc: not in enabled drivers build config 00:01:44.134 vdpa/mlx5: not in enabled drivers build config 00:01:44.134 vdpa/nfp: not in enabled drivers build config 00:01:44.134 vdpa/sfc: not in enabled drivers build config 00:01:44.135 event/*: missing internal dependency, "eventdev" 00:01:44.135 baseband/*: missing internal dependency, "bbdev" 00:01:44.135 gpu/*: missing internal dependency, "gpudev" 00:01:44.135 00:01:44.135 00:01:44.135 Build targets in project: 84 00:01:44.135 00:01:44.135 DPDK 24.03.0 00:01:44.135 00:01:44.135 User defined options 00:01:44.135 buildtype : debug 00:01:44.135 default_library : shared 00:01:44.135 libdir : lib 00:01:44.135 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:44.135 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:44.135 c_link_args : 00:01:44.135 cpu_instruction_set: native 00:01:44.135 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:44.135 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:44.135 enable_docs : false 00:01:44.135 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:44.135 enable_kmods : false 00:01:44.135 max_lcores : 128 00:01:44.135 tests : false 00:01:44.135 00:01:44.135 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.135 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:44.135 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:44.135 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:44.135 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:44.135 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:44.135 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:44.135 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:44.135 [7/267] Linking static target lib/librte_kvargs.a 00:01:44.135 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:44.135 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:44.135 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:44.135 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:44.135 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:44.135 [13/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:44.135 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:44.135 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:44.135 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:44.135 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:44.135 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:44.135 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:44.135 [20/267] Linking static target lib/librte_log.a 00:01:44.135 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:44.135 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:44.135 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:44.394 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:44.394 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:44.394 [26/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:44.394 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:44.394 [28/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:44.394 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:44.394 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:44.394 [31/267] Linking static target 
lib/librte_pci.a 00:01:44.394 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:44.394 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:44.394 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:44.394 [35/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:44.394 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:44.394 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:44.394 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:44.653 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.653 [40/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:44.653 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:44.653 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:44.653 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:44.653 [44/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.653 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:44.653 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:44.653 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:44.653 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:44.653 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:44.653 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:44.653 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:44.653 [52/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:44.653 [53/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:44.653 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:44.653 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:44.653 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:44.653 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:44.653 [58/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:44.653 [59/267] Linking static target lib/librte_timer.a 00:01:44.653 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:44.653 [61/267] Linking static target lib/librte_meter.a 00:01:44.653 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:44.653 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:44.653 [64/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:44.653 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:44.653 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:44.653 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:44.653 [68/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:44.653 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:44.653 [70/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:44.653 [71/267] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:44.653 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:44.653 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:44.653 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:44.653 [75/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:44.653 [76/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:44.653 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:44.653 [78/267] Linking static target lib/librte_telemetry.a 00:01:44.653 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:44.653 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:44.653 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:44.653 [82/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:44.653 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:44.653 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:44.653 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:44.654 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:44.654 [87/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:44.654 [88/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:44.654 [89/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:44.654 [90/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:44.654 [91/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:44.654 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:44.654 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:44.654 [94/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:44.654 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:44.654 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:44.654 [97/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:44.654 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:44.654 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:44.654 [100/267] Linking static target lib/librte_ring.a 00:01:44.654 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:44.654 [102/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:44.654 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:44.654 [104/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:44.654 [105/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:44.654 [106/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:44.654 [107/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:44.654 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:44.654 [109/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:44.654 [110/267] Linking static target lib/librte_cmdline.a 00:01:44.654 [111/267] Compiling C 
object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:44.654 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:44.654 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:44.916 [114/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:44.916 [115/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:44.916 [116/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:44.916 [117/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:44.916 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:44.916 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:44.916 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:44.916 [121/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:44.916 [122/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:44.916 [123/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:44.916 [124/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:44.916 [125/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:44.916 [126/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:44.916 [127/267] Linking static target lib/librte_dmadev.a 00:01:44.916 [128/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:44.916 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:44.916 [130/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:44.916 [131/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:44.916 [132/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:44.916 [133/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:44.916 [134/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:44.916 [135/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:44.916 [136/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:44.916 [137/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:44.916 [138/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:44.916 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:44.916 [140/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:44.916 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:44.916 [142/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:44.916 [143/267] Linking static target lib/librte_rcu.a 00:01:44.916 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:44.916 [145/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:44.916 [146/267] Linking static target lib/librte_mempool.a 00:01:44.916 [147/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:44.916 [148/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:44.916 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:44.916 [150/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:44.916 [151/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:44.916 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:44.916 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:44.916 [154/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:44.916 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:44.916 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:44.916 [157/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:44.916 [158/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:44.916 [159/267] Linking static target lib/librte_net.a 00:01:44.916 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:44.916 [161/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:44.916 [162/267] Linking static target lib/librte_compressdev.a 00:01:44.916 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:44.916 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:44.916 [165/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:44.916 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:44.916 [167/267] Linking target lib/librte_log.so.24.1 00:01:44.916 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:44.916 [169/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:44.916 [170/267] Linking static target lib/librte_power.a 00:01:44.916 [171/267] Linking static target lib/librte_eal.a 00:01:44.916 [172/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:44.916 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:44.917 [174/267] Linking static target lib/librte_reorder.a 00:01:44.917 [175/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:44.917 [176/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.917 [177/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:44.917 [178/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:44.917 [179/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:44.917 [180/267] Linking static target lib/librte_mbuf.a 00:01:44.917 [181/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:44.917 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:44.917 [183/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:44.917 [184/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:44.917 [185/267] Linking static target lib/librte_security.a 00:01:44.917 [186/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.917 [187/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.178 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:45.178 [189/267] Linking static target drivers/librte_bus_vdev.a 00:01:45.178 [190/267] Linking target lib/librte_kvargs.so.24.1 00:01:45.178 [191/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:45.178 [192/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.178 [193/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson 
to capture output) 00:01:45.178 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:45.178 [195/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:45.178 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.178 [197/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.178 [198/267] Linking static target lib/librte_hash.a 00:01:45.178 [199/267] Linking static target drivers/librte_bus_pci.a 00:01:45.178 [200/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:45.178 [201/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:45.178 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:45.178 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:45.178 [204/267] Linking static target drivers/librte_mempool_ring.a 00:01:45.178 [205/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.178 [206/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.178 [207/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.178 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:45.178 [209/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:45.178 [210/267] Linking static target lib/librte_cryptodev.a 00:01:45.440 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:45.440 [212/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:45.440 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.440 [214/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.700 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.700 [216/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:45.700 [217/267] Linking static target lib/librte_ethdev.a 00:01:45.700 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.700 [219/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.700 [220/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:45.959 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.959 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.959 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.959 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.219 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.219 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.789 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:46.789 [228/267] Linking static target lib/librte_vhost.a 00:01:47.359 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:49.268 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.846 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.416 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.416 [233/267] Linking target lib/librte_eal.so.24.1 00:01:56.676 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:56.676 [235/267] Linking target lib/librte_ring.so.24.1 00:01:56.676 [236/267] Linking target lib/librte_meter.so.24.1 00:01:56.676 [237/267] Linking target lib/librte_pci.so.24.1 00:01:56.676 [238/267] Linking target lib/librte_timer.so.24.1 00:01:56.676 [239/267] Linking target lib/librte_dmadev.so.24.1 00:01:56.676 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:56.944 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:56.944 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:56.944 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:56.944 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:56.944 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:56.944 [246/267] Linking target lib/librte_rcu.so.24.1 00:01:56.944 [247/267] Linking target lib/librte_mempool.so.24.1 00:01:56.944 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:56.944 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:56.944 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:56.944 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:56.944 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:57.234 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:57.234 [254/267] Linking target lib/librte_reorder.so.24.1 00:01:57.234 [255/267] Linking target lib/librte_compressdev.so.24.1 00:01:57.234 [256/267] Linking target lib/librte_net.so.24.1 00:01:57.234 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:57.234 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:57.543 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:57.543 [260/267] Linking target lib/librte_cmdline.so.24.1 00:01:57.543 [261/267] Linking target lib/librte_hash.so.24.1 00:01:57.543 [262/267] Linking target lib/librte_security.so.24.1 00:01:57.543 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:57.543 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:57.543 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:57.543 [266/267] Linking target lib/librte_power.so.24.1 00:01:57.543 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:57.543 INFO: autodetecting backend as ninja 00:01:57.543 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:01.785 CC lib/ut/ut.o 00:02:01.785 CC lib/log/log.o 00:02:01.785 CC lib/log/log_flags.o 00:02:01.785 CC lib/ut_mock/mock.o 00:02:01.785 CC lib/log/log_deprecated.o 00:02:01.785 LIB libspdk_log.a 00:02:01.785 LIB libspdk_ut.a 00:02:01.785 LIB libspdk_ut_mock.a 
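The [1/267]..[267/267] ninja run above is SPDK's bundled DPDK being compiled with the meson configuration summarized at the top of this section (only the bus, bus/pci, bus/vdev and mempool/ring drivers enabled; docs, kmods and tests off; max_lcores capped at 128). A minimal sketch of reproducing that configure-and-build step by hand follows; the option values are read off the printed summary, the long disable_libs list is left out, and the paths assume this workspace, so treat it as an approximation rather than the verbatim command the build scripts ran:

    # Hedged reconstruction of the DPDK sub-build seen above (not the exact CI invocation).
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
    meson setup build-tmp \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false \
        -Dtests=false -Dmax_lcores=128    # plus -Ddisable_libs=<the list printed in the summary above>
    ninja -C build-tmp -j 144             # -j matches the backend command meson reports above

The CC/LIB lines around this point are SPDK's own make output: each lib/ source file is compiled, archived into a libspdk_*.a static library, and, on the SO/SYMLINK lines that follow, exposed as a versioned shared object.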
00:02:01.785 SO libspdk_log.so.7.1 00:02:01.785 SO libspdk_ut.so.2.0 00:02:01.785 SO libspdk_ut_mock.so.6.0 00:02:01.785 SYMLINK libspdk_log.so 00:02:01.785 SYMLINK libspdk_ut.so 00:02:01.785 SYMLINK libspdk_ut_mock.so 00:02:02.046 CC lib/util/base64.o 00:02:02.046 CC lib/util/bit_array.o 00:02:02.046 CC lib/util/cpuset.o 00:02:02.046 CC lib/util/crc16.o 00:02:02.046 CC lib/util/crc32.o 00:02:02.046 CC lib/util/crc32c.o 00:02:02.046 CXX lib/trace_parser/trace.o 00:02:02.046 CC lib/util/crc32_ieee.o 00:02:02.046 CC lib/util/crc64.o 00:02:02.046 CC lib/util/dif.o 00:02:02.046 CC lib/dma/dma.o 00:02:02.046 CC lib/util/fd.o 00:02:02.046 CC lib/ioat/ioat.o 00:02:02.046 CC lib/util/fd_group.o 00:02:02.046 CC lib/util/file.o 00:02:02.046 CC lib/util/iov.o 00:02:02.046 CC lib/util/hexlify.o 00:02:02.046 CC lib/util/math.o 00:02:02.046 CC lib/util/net.o 00:02:02.046 CC lib/util/pipe.o 00:02:02.046 CC lib/util/strerror_tls.o 00:02:02.046 CC lib/util/string.o 00:02:02.046 CC lib/util/uuid.o 00:02:02.046 CC lib/util/xor.o 00:02:02.046 CC lib/util/zipf.o 00:02:02.046 CC lib/util/md5.o 00:02:02.308 CC lib/vfio_user/host/vfio_user.o 00:02:02.308 CC lib/vfio_user/host/vfio_user_pci.o 00:02:02.308 LIB libspdk_dma.a 00:02:02.308 SO libspdk_dma.so.5.0 00:02:02.308 LIB libspdk_ioat.a 00:02:02.308 SYMLINK libspdk_dma.so 00:02:02.308 SO libspdk_ioat.so.7.0 00:02:02.570 SYMLINK libspdk_ioat.so 00:02:02.570 LIB libspdk_vfio_user.a 00:02:02.570 SO libspdk_vfio_user.so.5.0 00:02:02.570 LIB libspdk_util.a 00:02:02.570 SYMLINK libspdk_vfio_user.so 00:02:02.570 SO libspdk_util.so.10.1 00:02:02.832 SYMLINK libspdk_util.so 00:02:02.832 LIB libspdk_trace_parser.a 00:02:03.094 SO libspdk_trace_parser.so.6.0 00:02:03.094 SYMLINK libspdk_trace_parser.so 00:02:03.094 CC lib/conf/conf.o 00:02:03.094 CC lib/json/json_parse.o 00:02:03.094 CC lib/json/json_util.o 00:02:03.094 CC lib/json/json_write.o 00:02:03.094 CC lib/env_dpdk/env.o 00:02:03.094 CC lib/rdma_utils/rdma_utils.o 00:02:03.094 CC lib/env_dpdk/memory.o 00:02:03.094 CC lib/vmd/vmd.o 00:02:03.094 CC lib/env_dpdk/pci.o 00:02:03.094 CC lib/env_dpdk/init.o 00:02:03.094 CC lib/vmd/led.o 00:02:03.094 CC lib/env_dpdk/threads.o 00:02:03.094 CC lib/env_dpdk/pci_ioat.o 00:02:03.094 CC lib/env_dpdk/pci_virtio.o 00:02:03.094 CC lib/env_dpdk/pci_vmd.o 00:02:03.094 CC lib/idxd/idxd.o 00:02:03.094 CC lib/env_dpdk/pci_idxd.o 00:02:03.094 CC lib/env_dpdk/pci_event.o 00:02:03.094 CC lib/idxd/idxd_user.o 00:02:03.094 CC lib/env_dpdk/sigbus_handler.o 00:02:03.094 CC lib/idxd/idxd_kernel.o 00:02:03.094 CC lib/env_dpdk/pci_dpdk.o 00:02:03.094 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:03.094 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:03.355 LIB libspdk_conf.a 00:02:03.616 LIB libspdk_rdma_utils.a 00:02:03.616 SO libspdk_conf.so.6.0 00:02:03.616 LIB libspdk_json.a 00:02:03.616 SO libspdk_rdma_utils.so.1.0 00:02:03.616 SO libspdk_json.so.6.0 00:02:03.616 SYMLINK libspdk_conf.so 00:02:03.616 SYMLINK libspdk_rdma_utils.so 00:02:03.616 SYMLINK libspdk_json.so 00:02:03.877 LIB libspdk_idxd.a 00:02:03.877 LIB libspdk_vmd.a 00:02:03.877 SO libspdk_idxd.so.12.1 00:02:03.877 SO libspdk_vmd.so.6.0 00:02:03.877 SYMLINK libspdk_idxd.so 00:02:03.877 SYMLINK libspdk_vmd.so 00:02:03.877 CC lib/rdma_provider/common.o 00:02:03.877 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:03.877 CC lib/jsonrpc/jsonrpc_server.o 00:02:03.877 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:03.877 CC lib/jsonrpc/jsonrpc_client.o 00:02:03.877 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:04.139 LIB libspdk_rdma_provider.a 00:02:04.139 SO 
libspdk_rdma_provider.so.7.0 00:02:04.399 LIB libspdk_jsonrpc.a 00:02:04.399 SYMLINK libspdk_rdma_provider.so 00:02:04.399 SO libspdk_jsonrpc.so.6.0 00:02:04.399 SYMLINK libspdk_jsonrpc.so 00:02:04.399 LIB libspdk_env_dpdk.a 00:02:04.660 SO libspdk_env_dpdk.so.15.1 00:02:04.660 SYMLINK libspdk_env_dpdk.so 00:02:04.660 CC lib/rpc/rpc.o 00:02:04.921 LIB libspdk_rpc.a 00:02:04.921 SO libspdk_rpc.so.6.0 00:02:05.221 SYMLINK libspdk_rpc.so 00:02:05.482 CC lib/trace/trace.o 00:02:05.482 CC lib/notify/notify.o 00:02:05.482 CC lib/trace/trace_flags.o 00:02:05.482 CC lib/notify/notify_rpc.o 00:02:05.482 CC lib/keyring/keyring.o 00:02:05.482 CC lib/trace/trace_rpc.o 00:02:05.482 CC lib/keyring/keyring_rpc.o 00:02:05.743 LIB libspdk_notify.a 00:02:05.743 SO libspdk_notify.so.6.0 00:02:05.743 LIB libspdk_keyring.a 00:02:05.743 LIB libspdk_trace.a 00:02:05.743 SYMLINK libspdk_notify.so 00:02:05.743 SO libspdk_keyring.so.2.0 00:02:05.743 SO libspdk_trace.so.11.0 00:02:05.743 SYMLINK libspdk_keyring.so 00:02:05.743 SYMLINK libspdk_trace.so 00:02:06.313 CC lib/sock/sock.o 00:02:06.313 CC lib/thread/thread.o 00:02:06.313 CC lib/sock/sock_rpc.o 00:02:06.313 CC lib/thread/iobuf.o 00:02:06.574 LIB libspdk_sock.a 00:02:06.574 SO libspdk_sock.so.10.0 00:02:06.834 SYMLINK libspdk_sock.so 00:02:07.095 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:07.095 CC lib/nvme/nvme_ctrlr.o 00:02:07.095 CC lib/nvme/nvme_fabric.o 00:02:07.095 CC lib/nvme/nvme_ns_cmd.o 00:02:07.095 CC lib/nvme/nvme_ns.o 00:02:07.095 CC lib/nvme/nvme_pcie_common.o 00:02:07.095 CC lib/nvme/nvme_pcie.o 00:02:07.095 CC lib/nvme/nvme_qpair.o 00:02:07.095 CC lib/nvme/nvme.o 00:02:07.095 CC lib/nvme/nvme_quirks.o 00:02:07.095 CC lib/nvme/nvme_transport.o 00:02:07.095 CC lib/nvme/nvme_discovery.o 00:02:07.095 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:07.095 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:07.095 CC lib/nvme/nvme_tcp.o 00:02:07.095 CC lib/nvme/nvme_opal.o 00:02:07.095 CC lib/nvme/nvme_io_msg.o 00:02:07.095 CC lib/nvme/nvme_poll_group.o 00:02:07.095 CC lib/nvme/nvme_zns.o 00:02:07.095 CC lib/nvme/nvme_stubs.o 00:02:07.095 CC lib/nvme/nvme_auth.o 00:02:07.095 CC lib/nvme/nvme_cuse.o 00:02:07.095 CC lib/nvme/nvme_vfio_user.o 00:02:07.095 CC lib/nvme/nvme_rdma.o 00:02:07.667 LIB libspdk_thread.a 00:02:07.667 SO libspdk_thread.so.11.0 00:02:07.667 SYMLINK libspdk_thread.so 00:02:07.928 CC lib/accel/accel.o 00:02:07.928 CC lib/accel/accel_rpc.o 00:02:07.928 CC lib/accel/accel_sw.o 00:02:07.928 CC lib/fsdev/fsdev.o 00:02:07.928 CC lib/fsdev/fsdev_io.o 00:02:07.928 CC lib/init/json_config.o 00:02:07.928 CC lib/fsdev/fsdev_rpc.o 00:02:07.928 CC lib/init/subsystem.o 00:02:07.928 CC lib/init/subsystem_rpc.o 00:02:07.928 CC lib/virtio/virtio.o 00:02:07.928 CC lib/init/rpc.o 00:02:07.928 CC lib/virtio/virtio_vhost_user.o 00:02:07.928 CC lib/virtio/virtio_vfio_user.o 00:02:07.928 CC lib/vfu_tgt/tgt_endpoint.o 00:02:07.928 CC lib/vfu_tgt/tgt_rpc.o 00:02:07.928 CC lib/virtio/virtio_pci.o 00:02:07.928 CC lib/blob/blobstore.o 00:02:07.928 CC lib/blob/request.o 00:02:07.928 CC lib/blob/zeroes.o 00:02:07.928 CC lib/blob/blob_bs_dev.o 00:02:08.190 LIB libspdk_init.a 00:02:08.450 SO libspdk_init.so.6.0 00:02:08.450 LIB libspdk_virtio.a 00:02:08.450 LIB libspdk_vfu_tgt.a 00:02:08.450 SYMLINK libspdk_init.so 00:02:08.450 SO libspdk_virtio.so.7.0 00:02:08.450 SO libspdk_vfu_tgt.so.3.0 00:02:08.450 SYMLINK libspdk_vfu_tgt.so 00:02:08.450 SYMLINK libspdk_virtio.so 00:02:08.711 LIB libspdk_fsdev.a 00:02:08.711 SO libspdk_fsdev.so.2.0 00:02:08.711 CC lib/event/app.o 00:02:08.711 CC 
lib/event/reactor.o 00:02:08.711 CC lib/event/log_rpc.o 00:02:08.711 CC lib/event/app_rpc.o 00:02:08.711 CC lib/event/scheduler_static.o 00:02:08.711 SYMLINK libspdk_fsdev.so 00:02:08.973 LIB libspdk_accel.a 00:02:08.973 SO libspdk_accel.so.16.0 00:02:08.973 LIB libspdk_nvme.a 00:02:09.234 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:09.234 SYMLINK libspdk_accel.so 00:02:09.234 LIB libspdk_event.a 00:02:09.234 SO libspdk_nvme.so.15.0 00:02:09.234 SO libspdk_event.so.14.0 00:02:09.234 SYMLINK libspdk_event.so 00:02:09.495 SYMLINK libspdk_nvme.so 00:02:09.495 CC lib/bdev/bdev.o 00:02:09.495 CC lib/bdev/bdev_rpc.o 00:02:09.495 CC lib/bdev/bdev_zone.o 00:02:09.495 CC lib/bdev/part.o 00:02:09.495 CC lib/bdev/scsi_nvme.o 00:02:09.756 LIB libspdk_fuse_dispatcher.a 00:02:09.756 SO libspdk_fuse_dispatcher.so.1.0 00:02:09.756 SYMLINK libspdk_fuse_dispatcher.so 00:02:10.699 LIB libspdk_blob.a 00:02:10.699 SO libspdk_blob.so.11.0 00:02:10.959 SYMLINK libspdk_blob.so 00:02:11.219 CC lib/blobfs/blobfs.o 00:02:11.219 CC lib/blobfs/tree.o 00:02:11.219 CC lib/lvol/lvol.o 00:02:11.792 LIB libspdk_bdev.a 00:02:11.792 SO libspdk_bdev.so.17.0 00:02:12.054 LIB libspdk_blobfs.a 00:02:12.054 SYMLINK libspdk_bdev.so 00:02:12.054 SO libspdk_blobfs.so.10.0 00:02:12.054 LIB libspdk_lvol.a 00:02:12.054 SYMLINK libspdk_blobfs.so 00:02:12.054 SO libspdk_lvol.so.10.0 00:02:12.317 SYMLINK libspdk_lvol.so 00:02:12.317 CC lib/ublk/ublk.o 00:02:12.317 CC lib/ublk/ublk_rpc.o 00:02:12.317 CC lib/nbd/nbd.o 00:02:12.317 CC lib/nbd/nbd_rpc.o 00:02:12.317 CC lib/nvmf/ctrlr.o 00:02:12.317 CC lib/nvmf/ctrlr_discovery.o 00:02:12.317 CC lib/ftl/ftl_core.o 00:02:12.317 CC lib/nvmf/ctrlr_bdev.o 00:02:12.317 CC lib/ftl/ftl_init.o 00:02:12.317 CC lib/nvmf/subsystem.o 00:02:12.317 CC lib/scsi/dev.o 00:02:12.317 CC lib/ftl/ftl_layout.o 00:02:12.317 CC lib/nvmf/nvmf.o 00:02:12.317 CC lib/ftl/ftl_debug.o 00:02:12.317 CC lib/scsi/lun.o 00:02:12.317 CC lib/nvmf/nvmf_rpc.o 00:02:12.317 CC lib/scsi/port.o 00:02:12.317 CC lib/ftl/ftl_io.o 00:02:12.317 CC lib/nvmf/transport.o 00:02:12.317 CC lib/scsi/scsi.o 00:02:12.317 CC lib/ftl/ftl_sb.o 00:02:12.317 CC lib/nvmf/tcp.o 00:02:12.317 CC lib/scsi/scsi_bdev.o 00:02:12.317 CC lib/ftl/ftl_l2p.o 00:02:12.317 CC lib/nvmf/stubs.o 00:02:12.317 CC lib/ftl/ftl_l2p_flat.o 00:02:12.317 CC lib/scsi/scsi_pr.o 00:02:12.317 CC lib/nvmf/mdns_server.o 00:02:12.317 CC lib/ftl/ftl_nv_cache.o 00:02:12.317 CC lib/scsi/scsi_rpc.o 00:02:12.317 CC lib/nvmf/vfio_user.o 00:02:12.317 CC lib/ftl/ftl_band.o 00:02:12.317 CC lib/nvmf/rdma.o 00:02:12.317 CC lib/ftl/ftl_band_ops.o 00:02:12.317 CC lib/scsi/task.o 00:02:12.317 CC lib/nvmf/auth.o 00:02:12.317 CC lib/ftl/ftl_writer.o 00:02:12.317 CC lib/ftl/ftl_rq.o 00:02:12.317 CC lib/ftl/ftl_reloc.o 00:02:12.317 CC lib/ftl/ftl_l2p_cache.o 00:02:12.317 CC lib/ftl/ftl_p2l.o 00:02:12.317 CC lib/ftl/ftl_p2l_log.o 00:02:12.317 CC lib/ftl/mngt/ftl_mngt.o 00:02:12.317 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:12.317 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:12.317 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:12.317 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:12.317 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:12.317 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:12.317 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:12.317 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:12.317 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:12.317 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:12.317 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:12.317 CC lib/ftl/utils/ftl_conf.o 00:02:12.317 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:12.317 CC lib/ftl/utils/ftl_md.o 
00:02:12.317 CC lib/ftl/utils/ftl_mempool.o 00:02:12.317 CC lib/ftl/utils/ftl_bitmap.o 00:02:12.317 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:12.317 CC lib/ftl/utils/ftl_property.o 00:02:12.317 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:12.317 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:12.317 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:12.317 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:12.583 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:12.583 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:12.583 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:12.583 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:12.583 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:12.583 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:12.583 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:12.583 CC lib/ftl/base/ftl_base_dev.o 00:02:12.583 CC lib/ftl/base/ftl_base_bdev.o 00:02:12.583 CC lib/ftl/ftl_trace.o 00:02:12.583 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:13.153 LIB libspdk_nbd.a 00:02:13.153 SO libspdk_nbd.so.7.0 00:02:13.153 LIB libspdk_scsi.a 00:02:13.153 SYMLINK libspdk_nbd.so 00:02:13.414 LIB libspdk_ublk.a 00:02:13.414 SO libspdk_scsi.so.9.0 00:02:13.414 SO libspdk_ublk.so.3.0 00:02:13.414 SYMLINK libspdk_scsi.so 00:02:13.414 SYMLINK libspdk_ublk.so 00:02:13.675 LIB libspdk_ftl.a 00:02:13.675 CC lib/iscsi/conn.o 00:02:13.675 CC lib/iscsi/init_grp.o 00:02:13.675 CC lib/iscsi/iscsi.o 00:02:13.675 CC lib/iscsi/param.o 00:02:13.675 CC lib/iscsi/portal_grp.o 00:02:13.675 CC lib/iscsi/tgt_node.o 00:02:13.675 CC lib/vhost/vhost.o 00:02:13.675 CC lib/iscsi/iscsi_subsystem.o 00:02:13.675 CC lib/vhost/vhost_rpc.o 00:02:13.675 CC lib/iscsi/iscsi_rpc.o 00:02:13.675 CC lib/vhost/vhost_scsi.o 00:02:13.675 CC lib/iscsi/task.o 00:02:13.675 CC lib/vhost/vhost_blk.o 00:02:13.675 CC lib/vhost/rte_vhost_user.o 00:02:13.675 SO libspdk_ftl.so.9.0 00:02:14.247 SYMLINK libspdk_ftl.so 00:02:14.508 LIB libspdk_nvmf.a 00:02:14.768 SO libspdk_nvmf.so.20.0 00:02:14.768 LIB libspdk_vhost.a 00:02:14.768 SO libspdk_vhost.so.8.0 00:02:14.768 SYMLINK libspdk_nvmf.so 00:02:15.029 SYMLINK libspdk_vhost.so 00:02:15.029 LIB libspdk_iscsi.a 00:02:15.029 SO libspdk_iscsi.so.8.0 00:02:15.290 SYMLINK libspdk_iscsi.so 00:02:15.862 CC module/env_dpdk/env_dpdk_rpc.o 00:02:15.862 CC module/vfu_device/vfu_virtio.o 00:02:15.862 CC module/vfu_device/vfu_virtio_blk.o 00:02:15.862 CC module/vfu_device/vfu_virtio_scsi.o 00:02:15.862 CC module/vfu_device/vfu_virtio_rpc.o 00:02:15.862 CC module/vfu_device/vfu_virtio_fs.o 00:02:15.862 LIB libspdk_env_dpdk_rpc.a 00:02:15.862 CC module/sock/posix/posix.o 00:02:15.862 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:15.862 CC module/accel/iaa/accel_iaa.o 00:02:15.862 CC module/accel/iaa/accel_iaa_rpc.o 00:02:15.862 CC module/accel/dsa/accel_dsa.o 00:02:15.862 CC module/accel/dsa/accel_dsa_rpc.o 00:02:16.124 CC module/accel/error/accel_error.o 00:02:16.124 CC module/accel/error/accel_error_rpc.o 00:02:16.124 CC module/accel/ioat/accel_ioat.o 00:02:16.124 CC module/accel/ioat/accel_ioat_rpc.o 00:02:16.124 CC module/keyring/linux/keyring.o 00:02:16.124 CC module/keyring/linux/keyring_rpc.o 00:02:16.124 SO libspdk_env_dpdk_rpc.so.6.0 00:02:16.124 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:16.124 CC module/blob/bdev/blob_bdev.o 00:02:16.124 CC module/fsdev/aio/fsdev_aio.o 00:02:16.124 CC module/scheduler/gscheduler/gscheduler.o 00:02:16.124 CC module/fsdev/aio/linux_aio_mgr.o 00:02:16.124 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:16.124 CC module/keyring/file/keyring.o 00:02:16.124 CC module/keyring/file/keyring_rpc.o 00:02:16.124 SYMLINK 
libspdk_env_dpdk_rpc.so 00:02:16.124 LIB libspdk_scheduler_gscheduler.a 00:02:16.124 LIB libspdk_keyring_linux.a 00:02:16.124 LIB libspdk_keyring_file.a 00:02:16.124 LIB libspdk_scheduler_dpdk_governor.a 00:02:16.124 LIB libspdk_accel_ioat.a 00:02:16.124 LIB libspdk_scheduler_dynamic.a 00:02:16.124 LIB libspdk_accel_iaa.a 00:02:16.124 LIB libspdk_accel_error.a 00:02:16.124 SO libspdk_scheduler_gscheduler.so.4.0 00:02:16.124 SO libspdk_keyring_file.so.2.0 00:02:16.385 SO libspdk_keyring_linux.so.1.0 00:02:16.385 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:16.385 SO libspdk_scheduler_dynamic.so.4.0 00:02:16.385 SO libspdk_accel_ioat.so.6.0 00:02:16.385 SO libspdk_accel_iaa.so.3.0 00:02:16.385 SO libspdk_accel_error.so.2.0 00:02:16.385 SYMLINK libspdk_scheduler_gscheduler.so 00:02:16.385 SYMLINK libspdk_keyring_file.so 00:02:16.385 SYMLINK libspdk_keyring_linux.so 00:02:16.385 LIB libspdk_accel_dsa.a 00:02:16.385 LIB libspdk_blob_bdev.a 00:02:16.385 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:16.385 SYMLINK libspdk_scheduler_dynamic.so 00:02:16.385 SO libspdk_accel_dsa.so.5.0 00:02:16.385 SYMLINK libspdk_accel_ioat.so 00:02:16.385 SYMLINK libspdk_accel_iaa.so 00:02:16.385 SYMLINK libspdk_accel_error.so 00:02:16.385 SO libspdk_blob_bdev.so.11.0 00:02:16.385 LIB libspdk_vfu_device.a 00:02:16.385 SYMLINK libspdk_accel_dsa.so 00:02:16.385 SYMLINK libspdk_blob_bdev.so 00:02:16.385 SO libspdk_vfu_device.so.3.0 00:02:16.647 SYMLINK libspdk_vfu_device.so 00:02:16.647 LIB libspdk_fsdev_aio.a 00:02:16.647 LIB libspdk_sock_posix.a 00:02:16.647 SO libspdk_fsdev_aio.so.1.0 00:02:16.647 SO libspdk_sock_posix.so.6.0 00:02:16.909 SYMLINK libspdk_fsdev_aio.so 00:02:16.909 SYMLINK libspdk_sock_posix.so 00:02:16.909 CC module/bdev/gpt/gpt.o 00:02:16.909 CC module/bdev/gpt/vbdev_gpt.o 00:02:16.909 CC module/bdev/nvme/bdev_nvme.o 00:02:16.909 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:16.909 CC module/bdev/lvol/vbdev_lvol.o 00:02:16.909 CC module/bdev/nvme/nvme_rpc.o 00:02:16.909 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:16.909 CC module/bdev/nvme/bdev_mdns_client.o 00:02:16.909 CC module/bdev/delay/vbdev_delay.o 00:02:16.909 CC module/bdev/nvme/vbdev_opal.o 00:02:16.909 CC module/blobfs/bdev/blobfs_bdev.o 00:02:16.909 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:16.909 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:16.909 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:16.909 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:16.909 CC module/bdev/error/vbdev_error.o 00:02:16.909 CC module/bdev/passthru/vbdev_passthru.o 00:02:16.909 CC module/bdev/error/vbdev_error_rpc.o 00:02:16.909 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:16.909 CC module/bdev/raid/bdev_raid_rpc.o 00:02:16.909 CC module/bdev/raid/bdev_raid.o 00:02:16.909 CC module/bdev/raid/bdev_raid_sb.o 00:02:16.909 CC module/bdev/raid/raid0.o 00:02:16.909 CC module/bdev/malloc/bdev_malloc.o 00:02:16.909 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:16.909 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:16.909 CC module/bdev/raid/raid1.o 00:02:16.909 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:16.909 CC module/bdev/raid/concat.o 00:02:16.909 CC module/bdev/null/bdev_null.o 00:02:16.909 CC module/bdev/null/bdev_null_rpc.o 00:02:16.909 CC module/bdev/aio/bdev_aio.o 00:02:16.909 CC module/bdev/aio/bdev_aio_rpc.o 00:02:16.909 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:16.909 CC module/bdev/iscsi/bdev_iscsi.o 00:02:16.909 CC module/bdev/split/vbdev_split.o 00:02:16.909 CC module/bdev/split/vbdev_split_rpc.o 00:02:16.909 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:02:16.909 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:16.909 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:16.909 CC module/bdev/ftl/bdev_ftl.o 00:02:16.909 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:17.480 LIB libspdk_bdev_gpt.a 00:02:17.480 LIB libspdk_blobfs_bdev.a 00:02:17.480 LIB libspdk_bdev_split.a 00:02:17.480 SO libspdk_blobfs_bdev.so.6.0 00:02:17.480 SO libspdk_bdev_gpt.so.6.0 00:02:17.480 SO libspdk_bdev_split.so.6.0 00:02:17.480 LIB libspdk_bdev_null.a 00:02:17.480 LIB libspdk_bdev_passthru.a 00:02:17.480 LIB libspdk_bdev_error.a 00:02:17.480 SYMLINK libspdk_blobfs_bdev.so 00:02:17.480 LIB libspdk_bdev_ftl.a 00:02:17.480 LIB libspdk_bdev_zone_block.a 00:02:17.480 SO libspdk_bdev_null.so.6.0 00:02:17.480 SYMLINK libspdk_bdev_gpt.so 00:02:17.480 LIB libspdk_bdev_delay.a 00:02:17.480 SYMLINK libspdk_bdev_split.so 00:02:17.480 SO libspdk_bdev_passthru.so.6.0 00:02:17.480 SO libspdk_bdev_error.so.6.0 00:02:17.480 LIB libspdk_bdev_aio.a 00:02:17.480 LIB libspdk_bdev_malloc.a 00:02:17.480 SO libspdk_bdev_ftl.so.6.0 00:02:17.480 SO libspdk_bdev_zone_block.so.6.0 00:02:17.480 LIB libspdk_bdev_iscsi.a 00:02:17.480 SO libspdk_bdev_delay.so.6.0 00:02:17.480 SO libspdk_bdev_aio.so.6.0 00:02:17.480 SO libspdk_bdev_malloc.so.6.0 00:02:17.480 SYMLINK libspdk_bdev_null.so 00:02:17.480 SYMLINK libspdk_bdev_passthru.so 00:02:17.480 SO libspdk_bdev_iscsi.so.6.0 00:02:17.480 SYMLINK libspdk_bdev_error.so 00:02:17.480 SYMLINK libspdk_bdev_ftl.so 00:02:17.480 SYMLINK libspdk_bdev_zone_block.so 00:02:17.480 SYMLINK libspdk_bdev_delay.so 00:02:17.480 SYMLINK libspdk_bdev_aio.so 00:02:17.480 LIB libspdk_bdev_lvol.a 00:02:17.741 SYMLINK libspdk_bdev_malloc.so 00:02:17.741 SYMLINK libspdk_bdev_iscsi.so 00:02:17.741 LIB libspdk_bdev_virtio.a 00:02:17.741 SO libspdk_bdev_lvol.so.6.0 00:02:17.741 SO libspdk_bdev_virtio.so.6.0 00:02:17.741 SYMLINK libspdk_bdev_lvol.so 00:02:17.741 SYMLINK libspdk_bdev_virtio.so 00:02:18.002 LIB libspdk_bdev_raid.a 00:02:18.002 SO libspdk_bdev_raid.so.6.0 00:02:18.263 SYMLINK libspdk_bdev_raid.so 00:02:19.207 LIB libspdk_bdev_nvme.a 00:02:19.469 SO libspdk_bdev_nvme.so.7.1 00:02:19.469 SYMLINK libspdk_bdev_nvme.so 00:02:20.414 CC module/event/subsystems/iobuf/iobuf.o 00:02:20.414 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:20.414 CC module/event/subsystems/vmd/vmd.o 00:02:20.414 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:20.414 CC module/event/subsystems/keyring/keyring.o 00:02:20.414 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:20.414 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:20.414 CC module/event/subsystems/scheduler/scheduler.o 00:02:20.414 CC module/event/subsystems/sock/sock.o 00:02:20.414 CC module/event/subsystems/fsdev/fsdev.o 00:02:20.414 LIB libspdk_event_keyring.a 00:02:20.414 LIB libspdk_event_iobuf.a 00:02:20.414 LIB libspdk_event_vmd.a 00:02:20.414 LIB libspdk_event_vhost_blk.a 00:02:20.414 LIB libspdk_event_vfu_tgt.a 00:02:20.414 LIB libspdk_event_scheduler.a 00:02:20.414 LIB libspdk_event_sock.a 00:02:20.414 LIB libspdk_event_fsdev.a 00:02:20.414 SO libspdk_event_keyring.so.1.0 00:02:20.414 SO libspdk_event_iobuf.so.3.0 00:02:20.414 SO libspdk_event_vmd.so.6.0 00:02:20.414 SO libspdk_event_vhost_blk.so.3.0 00:02:20.414 SO libspdk_event_vfu_tgt.so.3.0 00:02:20.414 SO libspdk_event_fsdev.so.1.0 00:02:20.414 SO libspdk_event_scheduler.so.4.0 00:02:20.414 SO libspdk_event_sock.so.5.0 00:02:20.414 SYMLINK libspdk_event_keyring.so 00:02:20.414 SYMLINK libspdk_event_iobuf.so 00:02:20.414 SYMLINK 
libspdk_event_vfu_tgt.so 00:02:20.414 SYMLINK libspdk_event_vhost_blk.so 00:02:20.414 SYMLINK libspdk_event_fsdev.so 00:02:20.414 SYMLINK libspdk_event_sock.so 00:02:20.414 SYMLINK libspdk_event_scheduler.so 00:02:20.414 SYMLINK libspdk_event_vmd.so 00:02:20.987 CC module/event/subsystems/accel/accel.o 00:02:20.987 LIB libspdk_event_accel.a 00:02:20.987 SO libspdk_event_accel.so.6.0 00:02:21.248 SYMLINK libspdk_event_accel.so 00:02:21.509 CC module/event/subsystems/bdev/bdev.o 00:02:21.770 LIB libspdk_event_bdev.a 00:02:21.770 SO libspdk_event_bdev.so.6.0 00:02:21.770 SYMLINK libspdk_event_bdev.so 00:02:22.031 CC module/event/subsystems/scsi/scsi.o 00:02:22.031 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:22.031 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:22.031 CC module/event/subsystems/ublk/ublk.o 00:02:22.031 CC module/event/subsystems/nbd/nbd.o 00:02:22.292 LIB libspdk_event_ublk.a 00:02:22.292 LIB libspdk_event_nbd.a 00:02:22.292 LIB libspdk_event_scsi.a 00:02:22.292 SO libspdk_event_ublk.so.3.0 00:02:22.292 SO libspdk_event_nbd.so.6.0 00:02:22.292 SO libspdk_event_scsi.so.6.0 00:02:22.292 LIB libspdk_event_nvmf.a 00:02:22.554 SYMLINK libspdk_event_ublk.so 00:02:22.554 SYMLINK libspdk_event_nbd.so 00:02:22.554 SYMLINK libspdk_event_scsi.so 00:02:22.554 SO libspdk_event_nvmf.so.6.0 00:02:22.554 SYMLINK libspdk_event_nvmf.so 00:02:22.815 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:22.815 CC module/event/subsystems/iscsi/iscsi.o 00:02:23.076 LIB libspdk_event_vhost_scsi.a 00:02:23.076 LIB libspdk_event_iscsi.a 00:02:23.076 SO libspdk_event_vhost_scsi.so.3.0 00:02:23.076 SO libspdk_event_iscsi.so.6.0 00:02:23.076 SYMLINK libspdk_event_vhost_scsi.so 00:02:23.076 SYMLINK libspdk_event_iscsi.so 00:02:23.338 SO libspdk.so.6.0 00:02:23.338 SYMLINK libspdk.so 00:02:23.599 CXX app/trace/trace.o 00:02:23.599 CC app/trace_record/trace_record.o 00:02:23.599 CC app/spdk_top/spdk_top.o 00:02:23.599 CC app/spdk_lspci/spdk_lspci.o 00:02:23.599 CC app/spdk_nvme_perf/perf.o 00:02:23.599 CC app/spdk_nvme_identify/identify.o 00:02:23.599 CC test/rpc_client/rpc_client_test.o 00:02:23.599 CC app/spdk_nvme_discover/discovery_aer.o 00:02:23.599 TEST_HEADER include/spdk/accel_module.h 00:02:23.599 TEST_HEADER include/spdk/accel.h 00:02:23.599 TEST_HEADER include/spdk/assert.h 00:02:23.866 TEST_HEADER include/spdk/barrier.h 00:02:23.866 TEST_HEADER include/spdk/base64.h 00:02:23.866 TEST_HEADER include/spdk/bdev.h 00:02:23.866 TEST_HEADER include/spdk/bdev_zone.h 00:02:23.866 TEST_HEADER include/spdk/bdev_module.h 00:02:23.866 TEST_HEADER include/spdk/bit_array.h 00:02:23.866 TEST_HEADER include/spdk/bit_pool.h 00:02:23.866 TEST_HEADER include/spdk/blob_bdev.h 00:02:23.866 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:23.866 TEST_HEADER include/spdk/blobfs.h 00:02:23.866 TEST_HEADER include/spdk/blob.h 00:02:23.866 TEST_HEADER include/spdk/conf.h 00:02:23.866 TEST_HEADER include/spdk/config.h 00:02:23.866 TEST_HEADER include/spdk/crc16.h 00:02:23.866 TEST_HEADER include/spdk/cpuset.h 00:02:23.866 TEST_HEADER include/spdk/crc32.h 00:02:23.866 TEST_HEADER include/spdk/crc64.h 00:02:23.866 TEST_HEADER include/spdk/dif.h 00:02:23.866 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:23.866 TEST_HEADER include/spdk/dma.h 00:02:23.866 TEST_HEADER include/spdk/endian.h 00:02:23.866 TEST_HEADER include/spdk/env.h 00:02:23.866 TEST_HEADER include/spdk/env_dpdk.h 00:02:23.866 TEST_HEADER include/spdk/event.h 00:02:23.866 TEST_HEADER include/spdk/fd_group.h 00:02:23.866 TEST_HEADER include/spdk/fd.h 
00:02:23.866 CC app/nvmf_tgt/nvmf_main.o 00:02:23.866 TEST_HEADER include/spdk/file.h 00:02:23.866 CC app/iscsi_tgt/iscsi_tgt.o 00:02:23.866 TEST_HEADER include/spdk/fsdev_module.h 00:02:23.866 TEST_HEADER include/spdk/ftl.h 00:02:23.866 TEST_HEADER include/spdk/fsdev.h 00:02:23.866 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:23.866 CC app/spdk_dd/spdk_dd.o 00:02:23.866 TEST_HEADER include/spdk/gpt_spec.h 00:02:23.866 TEST_HEADER include/spdk/hexlify.h 00:02:23.866 TEST_HEADER include/spdk/histogram_data.h 00:02:23.866 TEST_HEADER include/spdk/idxd_spec.h 00:02:23.866 TEST_HEADER include/spdk/idxd.h 00:02:23.866 TEST_HEADER include/spdk/init.h 00:02:23.866 TEST_HEADER include/spdk/ioat.h 00:02:23.866 TEST_HEADER include/spdk/ioat_spec.h 00:02:23.866 TEST_HEADER include/spdk/json.h 00:02:23.866 TEST_HEADER include/spdk/iscsi_spec.h 00:02:23.866 TEST_HEADER include/spdk/keyring.h 00:02:23.866 TEST_HEADER include/spdk/jsonrpc.h 00:02:23.866 TEST_HEADER include/spdk/keyring_module.h 00:02:23.866 TEST_HEADER include/spdk/likely.h 00:02:23.866 CC app/spdk_tgt/spdk_tgt.o 00:02:23.866 TEST_HEADER include/spdk/log.h 00:02:23.866 TEST_HEADER include/spdk/lvol.h 00:02:23.866 TEST_HEADER include/spdk/md5.h 00:02:23.866 TEST_HEADER include/spdk/memory.h 00:02:23.866 TEST_HEADER include/spdk/nbd.h 00:02:23.866 TEST_HEADER include/spdk/mmio.h 00:02:23.866 TEST_HEADER include/spdk/net.h 00:02:23.866 TEST_HEADER include/spdk/notify.h 00:02:23.866 TEST_HEADER include/spdk/nvme.h 00:02:23.866 TEST_HEADER include/spdk/nvme_intel.h 00:02:23.866 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:23.866 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:23.866 TEST_HEADER include/spdk/nvme_spec.h 00:02:23.866 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:23.866 TEST_HEADER include/spdk/nvme_zns.h 00:02:23.866 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:23.866 TEST_HEADER include/spdk/nvmf_transport.h 00:02:23.866 TEST_HEADER include/spdk/nvmf.h 00:02:23.866 TEST_HEADER include/spdk/nvmf_spec.h 00:02:23.866 TEST_HEADER include/spdk/opal.h 00:02:23.866 TEST_HEADER include/spdk/pci_ids.h 00:02:23.866 TEST_HEADER include/spdk/opal_spec.h 00:02:23.866 TEST_HEADER include/spdk/pipe.h 00:02:23.866 TEST_HEADER include/spdk/queue.h 00:02:23.866 TEST_HEADER include/spdk/reduce.h 00:02:23.866 TEST_HEADER include/spdk/rpc.h 00:02:23.866 TEST_HEADER include/spdk/scheduler.h 00:02:23.866 TEST_HEADER include/spdk/scsi.h 00:02:23.866 TEST_HEADER include/spdk/scsi_spec.h 00:02:23.866 TEST_HEADER include/spdk/sock.h 00:02:23.866 TEST_HEADER include/spdk/stdinc.h 00:02:23.866 TEST_HEADER include/spdk/string.h 00:02:23.866 TEST_HEADER include/spdk/thread.h 00:02:23.866 TEST_HEADER include/spdk/trace_parser.h 00:02:23.866 TEST_HEADER include/spdk/trace.h 00:02:23.866 TEST_HEADER include/spdk/ublk.h 00:02:23.866 TEST_HEADER include/spdk/tree.h 00:02:23.866 TEST_HEADER include/spdk/uuid.h 00:02:23.866 TEST_HEADER include/spdk/util.h 00:02:23.866 TEST_HEADER include/spdk/version.h 00:02:23.866 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:23.867 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:23.867 TEST_HEADER include/spdk/vhost.h 00:02:23.867 TEST_HEADER include/spdk/vmd.h 00:02:23.867 TEST_HEADER include/spdk/xor.h 00:02:23.867 TEST_HEADER include/spdk/zipf.h 00:02:23.867 CXX test/cpp_headers/accel.o 00:02:23.867 CXX test/cpp_headers/accel_module.o 00:02:23.867 CXX test/cpp_headers/assert.o 00:02:23.867 CXX test/cpp_headers/barrier.o 00:02:23.867 CXX test/cpp_headers/base64.o 00:02:23.867 CXX test/cpp_headers/bdev.o 00:02:23.867 CXX 
test/cpp_headers/bdev_module.o 00:02:23.867 CXX test/cpp_headers/bit_array.o 00:02:23.867 CXX test/cpp_headers/bdev_zone.o 00:02:23.867 CXX test/cpp_headers/bit_pool.o 00:02:23.867 CXX test/cpp_headers/blobfs_bdev.o 00:02:23.867 CXX test/cpp_headers/blob_bdev.o 00:02:23.867 CXX test/cpp_headers/blobfs.o 00:02:23.867 CXX test/cpp_headers/conf.o 00:02:23.867 CXX test/cpp_headers/blob.o 00:02:23.867 CXX test/cpp_headers/config.o 00:02:23.867 CXX test/cpp_headers/crc16.o 00:02:23.867 CXX test/cpp_headers/cpuset.o 00:02:23.867 CXX test/cpp_headers/crc32.o 00:02:23.867 CXX test/cpp_headers/dif.o 00:02:23.867 CXX test/cpp_headers/crc64.o 00:02:23.867 CXX test/cpp_headers/endian.o 00:02:23.867 CXX test/cpp_headers/dma.o 00:02:23.867 CXX test/cpp_headers/env_dpdk.o 00:02:23.867 CXX test/cpp_headers/env.o 00:02:23.867 CXX test/cpp_headers/fd.o 00:02:23.867 CXX test/cpp_headers/event.o 00:02:23.867 CXX test/cpp_headers/fd_group.o 00:02:23.867 CXX test/cpp_headers/fsdev.o 00:02:23.867 CXX test/cpp_headers/fsdev_module.o 00:02:23.867 CXX test/cpp_headers/file.o 00:02:23.867 CXX test/cpp_headers/ftl.o 00:02:23.867 CXX test/cpp_headers/gpt_spec.o 00:02:23.867 CXX test/cpp_headers/fuse_dispatcher.o 00:02:23.867 CXX test/cpp_headers/hexlify.o 00:02:23.867 CXX test/cpp_headers/idxd.o 00:02:23.867 CXX test/cpp_headers/histogram_data.o 00:02:23.867 CXX test/cpp_headers/idxd_spec.o 00:02:23.867 CXX test/cpp_headers/init.o 00:02:23.867 CXX test/cpp_headers/ioat_spec.o 00:02:23.867 CXX test/cpp_headers/iscsi_spec.o 00:02:23.867 CXX test/cpp_headers/ioat.o 00:02:23.867 CXX test/cpp_headers/jsonrpc.o 00:02:23.867 CXX test/cpp_headers/json.o 00:02:23.867 CXX test/cpp_headers/keyring_module.o 00:02:23.867 CXX test/cpp_headers/log.o 00:02:23.867 CXX test/cpp_headers/likely.o 00:02:23.867 CXX test/cpp_headers/md5.o 00:02:23.867 CXX test/cpp_headers/keyring.o 00:02:23.867 CXX test/cpp_headers/lvol.o 00:02:23.867 CXX test/cpp_headers/nbd.o 00:02:23.867 CXX test/cpp_headers/notify.o 00:02:23.867 CXX test/cpp_headers/nvme.o 00:02:23.867 CXX test/cpp_headers/memory.o 00:02:23.867 CXX test/cpp_headers/mmio.o 00:02:23.867 CXX test/cpp_headers/nvme_intel.o 00:02:23.867 CXX test/cpp_headers/net.o 00:02:23.867 CXX test/cpp_headers/nvme_ocssd.o 00:02:24.139 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:24.139 CXX test/cpp_headers/nvme_spec.o 00:02:24.139 CXX test/cpp_headers/nvmf_cmd.o 00:02:24.139 CXX test/cpp_headers/nvme_zns.o 00:02:24.139 CXX test/cpp_headers/nvmf.o 00:02:24.139 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:24.139 CXX test/cpp_headers/nvmf_spec.o 00:02:24.139 CC examples/util/zipf/zipf.o 00:02:24.139 CC test/env/memory/memory_ut.o 00:02:24.139 CXX test/cpp_headers/nvmf_transport.o 00:02:24.139 CC examples/ioat/perf/perf.o 00:02:24.139 CXX test/cpp_headers/opal_spec.o 00:02:24.139 CXX test/cpp_headers/opal.o 00:02:24.139 CXX test/cpp_headers/pci_ids.o 00:02:24.139 CXX test/cpp_headers/rpc.o 00:02:24.139 CC examples/ioat/verify/verify.o 00:02:24.139 CXX test/cpp_headers/pipe.o 00:02:24.139 CXX test/cpp_headers/queue.o 00:02:24.139 CXX test/cpp_headers/scsi_spec.o 00:02:24.139 CXX test/cpp_headers/reduce.o 00:02:24.139 CC test/thread/poller_perf/poller_perf.o 00:02:24.139 CXX test/cpp_headers/scheduler.o 00:02:24.139 CXX test/cpp_headers/scsi.o 00:02:24.139 CC test/app/histogram_perf/histogram_perf.o 00:02:24.139 CXX test/cpp_headers/sock.o 00:02:24.139 CXX test/cpp_headers/string.o 00:02:24.139 CXX test/cpp_headers/stdinc.o 00:02:24.139 CC app/fio/nvme/fio_plugin.o 00:02:24.139 CXX test/cpp_headers/trace.o 
00:02:24.139 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:24.139 CXX test/cpp_headers/thread.o 00:02:24.139 CXX test/cpp_headers/trace_parser.o 00:02:24.139 CXX test/cpp_headers/ublk.o 00:02:24.139 CXX test/cpp_headers/util.o 00:02:24.139 CXX test/cpp_headers/tree.o 00:02:24.139 CC test/dma/test_dma/test_dma.o 00:02:24.139 CXX test/cpp_headers/uuid.o 00:02:24.139 CXX test/cpp_headers/version.o 00:02:24.139 CXX test/cpp_headers/xor.o 00:02:24.139 CXX test/cpp_headers/zipf.o 00:02:24.139 CXX test/cpp_headers/vfio_user_pci.o 00:02:24.139 CXX test/cpp_headers/vfio_user_spec.o 00:02:24.139 CXX test/cpp_headers/vmd.o 00:02:24.139 CXX test/cpp_headers/vhost.o 00:02:24.139 CC test/env/vtophys/vtophys.o 00:02:24.139 LINK spdk_lspci 00:02:24.139 CC test/app/stub/stub.o 00:02:24.139 LINK nvmf_tgt 00:02:24.139 CC test/env/pci/pci_ut.o 00:02:24.139 LINK rpc_client_test 00:02:24.139 CC test/app/jsoncat/jsoncat.o 00:02:24.139 LINK spdk_nvme_discover 00:02:24.139 LINK interrupt_tgt 00:02:24.139 CC app/fio/bdev/fio_plugin.o 00:02:24.139 CC test/app/bdev_svc/bdev_svc.o 00:02:24.409 LINK spdk_tgt 00:02:24.409 LINK spdk_trace_record 00:02:24.409 LINK iscsi_tgt 00:02:24.687 LINK poller_perf 00:02:24.687 LINK jsoncat 00:02:24.687 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:24.687 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:24.953 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:24.953 CC test/env/mem_callbacks/mem_callbacks.o 00:02:24.953 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:24.953 LINK spdk_dd 00:02:24.953 LINK zipf 00:02:25.214 LINK verify 00:02:25.214 LINK stub 00:02:25.214 LINK ioat_perf 00:02:25.214 LINK histogram_perf 00:02:25.475 LINK bdev_svc 00:02:25.475 LINK env_dpdk_post_init 00:02:25.475 LINK vtophys 00:02:25.475 CC test/event/event_perf/event_perf.o 00:02:25.475 CC test/event/reactor/reactor.o 00:02:25.475 CC test/event/reactor_perf/reactor_perf.o 00:02:25.475 LINK spdk_trace 00:02:25.475 CC test/event/app_repeat/app_repeat.o 00:02:25.475 CC test/event/scheduler/scheduler.o 00:02:25.475 LINK nvme_fuzz 00:02:25.737 LINK reactor 00:02:25.737 LINK event_perf 00:02:25.737 LINK reactor_perf 00:02:25.737 LINK spdk_nvme_identify 00:02:25.737 LINK spdk_nvme_perf 00:02:25.737 CC examples/vmd/led/led.o 00:02:25.737 LINK pci_ut 00:02:25.737 CC examples/idxd/perf/perf.o 00:02:25.737 LINK app_repeat 00:02:25.737 LINK mem_callbacks 00:02:25.737 CC examples/vmd/lsvmd/lsvmd.o 00:02:25.737 CC examples/sock/hello_world/hello_sock.o 00:02:25.737 CC examples/thread/thread/thread_ex.o 00:02:25.737 LINK spdk_bdev 00:02:25.737 LINK spdk_nvme 00:02:25.737 LINK vhost_fuzz 00:02:25.737 LINK spdk_top 00:02:25.737 LINK scheduler 00:02:25.737 LINK test_dma 00:02:25.737 LINK led 00:02:25.999 LINK lsvmd 00:02:25.999 CC app/vhost/vhost.o 00:02:25.999 LINK hello_sock 00:02:25.999 LINK idxd_perf 00:02:25.999 LINK thread 00:02:25.999 LINK vhost 00:02:26.260 LINK memory_ut 00:02:26.522 CC test/nvme/reset/reset.o 00:02:26.522 CC test/nvme/e2edp/nvme_dp.o 00:02:26.522 CC test/nvme/sgl/sgl.o 00:02:26.522 CC test/nvme/startup/startup.o 00:02:26.522 CC test/nvme/overhead/overhead.o 00:02:26.522 CC test/nvme/aer/aer.o 00:02:26.522 CC test/nvme/compliance/nvme_compliance.o 00:02:26.522 CC test/nvme/reserve/reserve.o 00:02:26.522 CC test/nvme/err_injection/err_injection.o 00:02:26.522 CC test/nvme/connect_stress/connect_stress.o 00:02:26.522 CC test/nvme/fdp/fdp.o 00:02:26.522 CC test/nvme/boot_partition/boot_partition.o 00:02:26.522 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:26.522 CC test/nvme/cuse/cuse.o 
00:02:26.522 CC test/nvme/simple_copy/simple_copy.o 00:02:26.522 CC test/nvme/fused_ordering/fused_ordering.o 00:02:26.522 CC test/accel/dif/dif.o 00:02:26.522 CC test/blobfs/mkfs/mkfs.o 00:02:26.522 CC examples/nvme/reconnect/reconnect.o 00:02:26.522 CC examples/nvme/arbitration/arbitration.o 00:02:26.522 CC examples/nvme/hello_world/hello_world.o 00:02:26.522 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:26.522 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:26.522 CC examples/nvme/hotplug/hotplug.o 00:02:26.522 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:26.522 CC examples/nvme/abort/abort.o 00:02:26.783 CC test/lvol/esnap/esnap.o 00:02:26.783 LINK startup 00:02:26.783 LINK err_injection 00:02:26.783 LINK iscsi_fuzz 00:02:26.783 LINK boot_partition 00:02:26.783 CC examples/accel/perf/accel_perf.o 00:02:26.783 LINK reserve 00:02:26.783 LINK connect_stress 00:02:26.783 CC examples/blob/hello_world/hello_blob.o 00:02:26.783 LINK doorbell_aers 00:02:26.783 LINK fused_ordering 00:02:26.783 CC examples/blob/cli/blobcli.o 00:02:26.783 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:26.783 LINK mkfs 00:02:26.783 LINK reset 00:02:26.783 LINK simple_copy 00:02:26.783 LINK nvme_dp 00:02:26.783 LINK sgl 00:02:26.783 LINK cmb_copy 00:02:26.783 LINK overhead 00:02:26.783 LINK aer 00:02:26.783 LINK nvme_compliance 00:02:26.783 LINK pmr_persistence 00:02:26.783 LINK hello_world 00:02:26.783 LINK fdp 00:02:26.783 LINK hotplug 00:02:27.043 LINK arbitration 00:02:27.043 LINK reconnect 00:02:27.043 LINK abort 00:02:27.043 LINK hello_blob 00:02:27.043 LINK hello_fsdev 00:02:27.043 LINK nvme_manage 00:02:27.304 LINK dif 00:02:27.304 LINK accel_perf 00:02:27.304 LINK blobcli 00:02:27.876 LINK cuse 00:02:27.876 CC test/bdev/bdevio/bdevio.o 00:02:27.876 CC examples/bdev/hello_world/hello_bdev.o 00:02:27.876 CC examples/bdev/bdevperf/bdevperf.o 00:02:28.137 LINK hello_bdev 00:02:28.137 LINK bdevio 00:02:28.709 LINK bdevperf 00:02:29.282 CC examples/nvmf/nvmf/nvmf.o 00:02:29.543 LINK nvmf 00:02:30.487 LINK esnap 00:02:31.060 00:02:31.060 real 0m56.649s 00:02:31.060 user 8m10.816s 00:02:31.060 sys 6m12.043s 00:02:31.060 07:16:48 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:31.060 07:16:48 make -- common/autotest_common.sh@10 -- $ set +x 00:02:31.060 ************************************ 00:02:31.060 END TEST make 00:02:31.060 ************************************ 00:02:31.060 07:16:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:31.060 07:16:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:31.060 07:16:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:31.060 07:16:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.060 07:16:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:31.060 07:16:49 -- pm/common@44 -- $ pid=3069302 00:02:31.060 07:16:49 -- pm/common@50 -- $ kill -TERM 3069302 00:02:31.060 07:16:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.060 07:16:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:31.060 07:16:49 -- pm/common@44 -- $ pid=3069303 00:02:31.060 07:16:49 -- pm/common@50 -- $ kill -TERM 3069303 00:02:31.060 07:16:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.060 07:16:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:31.060 
07:16:49 -- pm/common@44 -- $ pid=3069305 00:02:31.060 07:16:49 -- pm/common@50 -- $ kill -TERM 3069305 00:02:31.060 07:16:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.060 07:16:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:31.060 07:16:49 -- pm/common@44 -- $ pid=3069328 00:02:31.060 07:16:49 -- pm/common@50 -- $ sudo -E kill -TERM 3069328 00:02:31.060 07:16:49 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:31.060 07:16:49 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:31.060 07:16:49 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:31.060 07:16:49 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:31.060 07:16:49 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:31.060 07:16:49 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:31.060 07:16:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:31.060 07:16:49 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:31.060 07:16:49 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:31.060 07:16:49 -- scripts/common.sh@336 -- # IFS=.-: 00:02:31.060 07:16:49 -- scripts/common.sh@336 -- # read -ra ver1 00:02:31.060 07:16:49 -- scripts/common.sh@337 -- # IFS=.-: 00:02:31.060 07:16:49 -- scripts/common.sh@337 -- # read -ra ver2 00:02:31.060 07:16:49 -- scripts/common.sh@338 -- # local 'op=<' 00:02:31.060 07:16:49 -- scripts/common.sh@340 -- # ver1_l=2 00:02:31.060 07:16:49 -- scripts/common.sh@341 -- # ver2_l=1 00:02:31.060 07:16:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:31.060 07:16:49 -- scripts/common.sh@344 -- # case "$op" in 00:02:31.060 07:16:49 -- scripts/common.sh@345 -- # : 1 00:02:31.060 07:16:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:31.060 07:16:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:31.321 07:16:49 -- scripts/common.sh@365 -- # decimal 1 00:02:31.321 07:16:49 -- scripts/common.sh@353 -- # local d=1 00:02:31.321 07:16:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:31.321 07:16:49 -- scripts/common.sh@355 -- # echo 1 00:02:31.321 07:16:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:31.321 07:16:49 -- scripts/common.sh@366 -- # decimal 2 00:02:31.321 07:16:49 -- scripts/common.sh@353 -- # local d=2 00:02:31.321 07:16:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:31.321 07:16:49 -- scripts/common.sh@355 -- # echo 2 00:02:31.321 07:16:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:31.321 07:16:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:31.321 07:16:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:31.321 07:16:49 -- scripts/common.sh@368 -- # return 0 00:02:31.321 07:16:49 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:31.321 07:16:49 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:31.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.321 --rc genhtml_branch_coverage=1 00:02:31.321 --rc genhtml_function_coverage=1 00:02:31.321 --rc genhtml_legend=1 00:02:31.321 --rc geninfo_all_blocks=1 00:02:31.321 --rc geninfo_unexecuted_blocks=1 00:02:31.321 00:02:31.321 ' 00:02:31.321 07:16:49 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:31.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.321 --rc genhtml_branch_coverage=1 00:02:31.321 --rc genhtml_function_coverage=1 00:02:31.321 --rc genhtml_legend=1 00:02:31.321 --rc geninfo_all_blocks=1 00:02:31.321 --rc geninfo_unexecuted_blocks=1 00:02:31.321 00:02:31.321 ' 00:02:31.321 07:16:49 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:31.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.321 --rc genhtml_branch_coverage=1 00:02:31.321 --rc genhtml_function_coverage=1 00:02:31.321 --rc genhtml_legend=1 00:02:31.321 --rc geninfo_all_blocks=1 00:02:31.321 --rc geninfo_unexecuted_blocks=1 00:02:31.321 00:02:31.321 ' 00:02:31.321 07:16:49 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:31.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.321 --rc genhtml_branch_coverage=1 00:02:31.321 --rc genhtml_function_coverage=1 00:02:31.321 --rc genhtml_legend=1 00:02:31.321 --rc geninfo_all_blocks=1 00:02:31.321 --rc geninfo_unexecuted_blocks=1 00:02:31.321 00:02:31.321 ' 00:02:31.321 07:16:49 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:31.321 07:16:49 -- nvmf/common.sh@7 -- # uname -s 00:02:31.321 07:16:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:31.321 07:16:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:31.321 07:16:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:31.321 07:16:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:31.321 07:16:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:31.321 07:16:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:31.321 07:16:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:31.321 07:16:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:31.321 07:16:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:31.321 07:16:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:31.321 07:16:49 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:31.321 07:16:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:31.321 07:16:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:31.322 07:16:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:31.322 07:16:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:31.322 07:16:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:31.322 07:16:49 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:31.322 07:16:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:31.322 07:16:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:31.322 07:16:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:31.322 07:16:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:31.322 07:16:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.322 07:16:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.322 07:16:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.322 07:16:49 -- paths/export.sh@5 -- # export PATH 00:02:31.322 07:16:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.322 07:16:49 -- nvmf/common.sh@51 -- # : 0 00:02:31.322 07:16:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:31.322 07:16:49 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:31.322 07:16:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:31.322 07:16:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:31.322 07:16:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:31.322 07:16:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:31.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:31.322 07:16:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:31.322 07:16:49 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:31.322 07:16:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:31.322 07:16:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:31.322 07:16:49 -- spdk/autotest.sh@32 -- # uname -s 00:02:31.322 07:16:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:31.322 07:16:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:31.322 07:16:49 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
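The hostnqn/hostid pair traced just above comes from nvme-cli's gen-hostnqn. A minimal sketch of that derivation, assuming nvme-cli is installed; the parameter expansion used to split out the ID is inferred from the printed values, not copied from the script itself:

    #!/usr/bin/env bash
    # Sketch of the NVME_HOSTNQN / NVME_HOSTID setup traced above (nvmf/common.sh).
    # Assumes nvme-cli; the ##*: expansion is a guess based on the values shown.
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    echo "nvme connect ${NVME_HOST[*]} ..."

The two values printed in the trace (the full NQN and the bare UUID) are consistent with this split.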
00:02:31.322 07:16:49 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:31.322 07:16:49 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:31.322 07:16:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:31.322 07:16:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:31.322 07:16:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:31.322 07:16:49 -- spdk/autotest.sh@48 -- # udevadm_pid=3134870 00:02:31.322 07:16:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:31.322 07:16:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:31.322 07:16:49 -- pm/common@17 -- # local monitor 00:02:31.322 07:16:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.322 07:16:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.322 07:16:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.322 07:16:49 -- pm/common@21 -- # date +%s 00:02:31.322 07:16:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.322 07:16:49 -- pm/common@21 -- # date +%s 00:02:31.322 07:16:49 -- pm/common@25 -- # sleep 1 00:02:31.322 07:16:49 -- pm/common@21 -- # date +%s 00:02:31.322 07:16:49 -- pm/common@21 -- # date +%s 00:02:31.322 07:16:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732083409 00:02:31.322 07:16:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732083409 00:02:31.322 07:16:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732083409 00:02:31.322 07:16:49 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732083409 00:02:31.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732083409_collect-cpu-load.pm.log 00:02:31.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732083409_collect-vmstat.pm.log 00:02:31.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732083409_collect-cpu-temp.pm.log 00:02:31.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732083409_collect-bmc-pm.bmc.pm.log 00:02:32.264 07:16:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:32.264 07:16:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:32.264 07:16:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:32.264 07:16:50 -- common/autotest_common.sh@10 -- # set +x 00:02:32.264 07:16:50 -- spdk/autotest.sh@59 -- # create_test_list 00:02:32.264 07:16:50 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:32.264 07:16:50 -- common/autotest_common.sh@10 -- # set +x 00:02:32.264 07:16:50 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:32.264 07:16:50 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.264 07:16:50 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.264 07:16:50 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:32.264 07:16:50 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.264 07:16:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:32.264 07:16:50 -- common/autotest_common.sh@1455 -- # uname 00:02:32.264 07:16:50 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:32.264 07:16:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:32.264 07:16:50 -- common/autotest_common.sh@1475 -- # uname 00:02:32.264 07:16:50 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:32.264 07:16:50 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:32.264 07:16:50 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:32.525 lcov: LCOV version 1.15 00:02:32.525 07:16:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:47.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:47.452 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:05.629 07:17:20 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:05.629 07:17:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:05.629 07:17:20 -- common/autotest_common.sh@10 -- # set +x 00:03:05.629 07:17:20 -- spdk/autotest.sh@78 -- # rm -f 00:03:05.629 07:17:20 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.200 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:06.200 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:06.200 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:06.200 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:06.200 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:06.200 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:06.200 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:06.200 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:06.200 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:06.200 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:06.200 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:06.462 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:06.462 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:06.462 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:06.462 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:06.462 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:06.462 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:06.723 07:17:24 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:06.723 07:17:24 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:06.723 07:17:24 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:06.723 07:17:24 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:06.723 07:17:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:06.723 07:17:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:06.723 07:17:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:06.723 07:17:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:06.723 07:17:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:06.723 07:17:24 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:06.723 07:17:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:06.723 07:17:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:06.723 07:17:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:06.723 07:17:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:06.723 07:17:24 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:06.723 No valid GPT data, bailing 00:03:06.723 07:17:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:06.723 07:17:24 -- scripts/common.sh@394 -- # pt= 00:03:06.723 07:17:24 -- scripts/common.sh@395 -- # return 1 00:03:06.723 07:17:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:06.723 1+0 records in 00:03:06.723 1+0 records out 00:03:06.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509435 s, 206 MB/s 00:03:06.723 07:17:24 -- spdk/autotest.sh@105 -- # sync 00:03:06.723 07:17:24 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:06.723 07:17:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:06.723 07:17:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.727 07:17:33 -- spdk/autotest.sh@111 -- # uname -s 00:03:16.727 07:17:33 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:16.727 07:17:33 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:16.727 07:17:33 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:19.277 Hugepages 00:03:19.277 node hugesize free / total 00:03:19.277 node0 1048576kB 0 / 0 00:03:19.277 node0 2048kB 0 / 0 00:03:19.277 node1 1048576kB 0 / 0 00:03:19.277 node1 2048kB 0 / 0 00:03:19.277 00:03:19.277 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:19.277 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:19.277 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:19.277 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:19.277 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:19.277 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:19.277 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:19.277 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:19.277 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:19.277 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:19.277 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:19.277 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:19.277 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:19.277 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:19.277 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:19.277 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:19.277 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:19.277 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:19.277 07:17:37 -- spdk/autotest.sh@117 -- # uname -s 00:03:19.277 07:17:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:19.277 07:17:37 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:19.277 07:17:37 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.582 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:22.582 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:22.582 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:22.582 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:22.582 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:22.582 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:22.582 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:22.843 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:22.843 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:22.843 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:22.843 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:22.843 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:22.843 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:22.843 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:22.843 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:22.843 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:24.758 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:25.019 07:17:43 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:25.962 07:17:44 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:25.962 07:17:44 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:25.962 07:17:44 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:25.962 07:17:44 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:25.962 07:17:44 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:25.962 07:17:44 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:25.962 07:17:44 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:25.962 07:17:44 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:25.962 07:17:44 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:25.962 07:17:44 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:25.962 07:17:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:25.962 07:17:44 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.172 Waiting for block devices as requested 00:03:30.172 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:30.172 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:30.172 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:30.172 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:30.172 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:30.172 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:30.172 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:30.172 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:30.173 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:30.434 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:30.434 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:30.434 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:30.695 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:30.695 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:30.695 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:30.956 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:30.956 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:03:31.221 07:17:49 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:31.221 07:17:49 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:31.221 07:17:49 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:31.221 07:17:49 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:31.221 07:17:49 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:31.221 07:17:49 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:31.221 07:17:49 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:31.221 07:17:49 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:31.221 07:17:49 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:31.221 07:17:49 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:31.221 07:17:49 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:31.221 07:17:49 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:31.221 07:17:49 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:31.221 07:17:49 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:31.221 07:17:49 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:31.221 07:17:49 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:31.221 07:17:49 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:31.221 07:17:49 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:31.221 07:17:49 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:31.221 07:17:49 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:31.221 07:17:49 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:31.221 07:17:49 -- common/autotest_common.sh@1541 -- # continue 00:03:31.221 07:17:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:31.221 07:17:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:31.221 07:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:31.482 07:17:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:31.482 07:17:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:31.482 07:17:49 -- common/autotest_common.sh@10 -- # set +x 00:03:31.482 07:17:49 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:34.787 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:34.787 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:34.787 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:34.787 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:34.787 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:35.047 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:35.047 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:35.047 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:35.047 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:35.047 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:35.047 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:35.047 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:35.047 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:35.047 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:35.047 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:35.047 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:35.047 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:35.619 07:17:53 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:35.619 07:17:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:35.619 07:17:53 -- common/autotest_common.sh@10 -- # set +x 00:03:35.619 07:17:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:35.619 07:17:53 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:35.619 07:17:53 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:35.619 07:17:53 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:35.619 07:17:53 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:35.619 07:17:53 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:35.619 07:17:53 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:35.619 07:17:53 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:35.619 07:17:53 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:35.619 07:17:53 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:35.619 07:17:53 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:35.619 07:17:53 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:35.619 07:17:53 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:35.619 07:17:53 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:35.619 07:17:53 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:35.619 07:17:53 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:35.619 07:17:53 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:35.619 07:17:53 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:35.619 07:17:53 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:35.619 07:17:53 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:35.619 07:17:53 -- common/autotest_common.sh@1570 -- # return 0 00:03:35.619 07:17:53 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:35.619 07:17:53 -- common/autotest_common.sh@1578 -- # return 0 00:03:35.619 07:17:53 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:35.619 07:17:53 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:35.619 07:17:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:35.619 07:17:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:35.619 07:17:53 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:35.619 07:17:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:35.619 07:17:53 -- common/autotest_common.sh@10 -- # set +x 00:03:35.619 07:17:53 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:35.619 07:17:53 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:35.619 07:17:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:35.619 07:17:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:35.619 07:17:53 -- common/autotest_common.sh@10 -- # set +x 00:03:35.619 ************************************ 00:03:35.619 START TEST env 00:03:35.619 ************************************ 00:03:35.619 07:17:53 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:35.881 * Looking for test storage... 
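The opal_revert_cleanup step traced above filters controllers by PCI device ID. A sketch of that filter, with the jq pipeline and the sysfs attribute taken directly from the trace (function names kept from the log, bookkeeping simplified):

    #!/usr/bin/env bash
    # Sketch of get_nvme_bdfs_by_id as traced above: enumerate NVMe BDFs via
    # gen_nvme.sh | jq, then keep those whose kernel-reported device ID matches.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    get_nvme_bdfs() {
        "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
    }

    get_nvme_bdfs_by_id() {
        local id=$1 bdf
        for bdf in $(get_nvme_bdfs); do
            [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$id" ]] && echo "$bdf"
        done
    }

    get_nvme_bdfs_by_id 0x0a54

In this run the only controller reports 0xa80a, so the 0x0a54 comparison fails and the resulting list is empty, which is why the cleanup returns immediately with (( 0 > 0 )) false.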
00:03:35.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:35.881 07:17:53 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:35.881 07:17:53 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:35.881 07:17:53 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:35.881 07:17:53 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:35.881 07:17:53 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:35.881 07:17:53 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:35.881 07:17:53 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:35.881 07:17:53 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.881 07:17:53 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:35.881 07:17:53 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:35.881 07:17:53 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:35.881 07:17:53 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:35.881 07:17:53 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:35.881 07:17:53 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:35.881 07:17:53 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:35.881 07:17:53 env -- scripts/common.sh@344 -- # case "$op" in 00:03:35.881 07:17:53 env -- scripts/common.sh@345 -- # : 1 00:03:35.881 07:17:53 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:35.881 07:17:53 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:35.881 07:17:53 env -- scripts/common.sh@365 -- # decimal 1 00:03:35.881 07:17:53 env -- scripts/common.sh@353 -- # local d=1 00:03:35.881 07:17:53 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.881 07:17:53 env -- scripts/common.sh@355 -- # echo 1 00:03:35.881 07:17:53 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:35.881 07:17:53 env -- scripts/common.sh@366 -- # decimal 2 00:03:35.881 07:17:53 env -- scripts/common.sh@353 -- # local d=2 00:03:35.881 07:17:53 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.881 07:17:53 env -- scripts/common.sh@355 -- # echo 2 00:03:35.881 07:17:53 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:35.881 07:17:53 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:35.881 07:17:53 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:35.881 07:17:53 env -- scripts/common.sh@368 -- # return 0 00:03:35.881 07:17:53 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.881 07:17:53 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:35.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.881 --rc genhtml_branch_coverage=1 00:03:35.881 --rc genhtml_function_coverage=1 00:03:35.881 --rc genhtml_legend=1 00:03:35.881 --rc geninfo_all_blocks=1 00:03:35.881 --rc geninfo_unexecuted_blocks=1 00:03:35.881 00:03:35.881 ' 00:03:35.881 07:17:53 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:35.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.881 --rc genhtml_branch_coverage=1 00:03:35.881 --rc genhtml_function_coverage=1 00:03:35.881 --rc genhtml_legend=1 00:03:35.881 --rc geninfo_all_blocks=1 00:03:35.881 --rc geninfo_unexecuted_blocks=1 00:03:35.881 00:03:35.881 ' 00:03:35.881 07:17:53 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:35.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.881 --rc genhtml_branch_coverage=1 00:03:35.881 --rc genhtml_function_coverage=1 
00:03:35.881 --rc genhtml_legend=1 00:03:35.881 --rc geninfo_all_blocks=1 00:03:35.881 --rc geninfo_unexecuted_blocks=1 00:03:35.881 00:03:35.881 ' 00:03:35.881 07:17:53 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:35.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.881 --rc genhtml_branch_coverage=1 00:03:35.881 --rc genhtml_function_coverage=1 00:03:35.881 --rc genhtml_legend=1 00:03:35.881 --rc geninfo_all_blocks=1 00:03:35.881 --rc geninfo_unexecuted_blocks=1 00:03:35.881 00:03:35.881 ' 00:03:35.881 07:17:53 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:35.881 07:17:53 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:35.881 07:17:53 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:35.881 07:17:53 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.881 ************************************ 00:03:35.881 START TEST env_memory 00:03:35.881 ************************************ 00:03:35.881 07:17:54 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:35.881 00:03:35.881 00:03:35.881 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.881 http://cunit.sourceforge.net/ 00:03:35.881 00:03:35.881 00:03:35.881 Suite: memory 00:03:35.881 Test: alloc and free memory map ...[2024-11-20 07:17:54.054919] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:35.881 passed 00:03:35.881 Test: mem map translation ...[2024-11-20 07:17:54.080555] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:35.881 [2024-11-20 07:17:54.080583] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:35.881 [2024-11-20 07:17:54.080632] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:35.881 [2024-11-20 07:17:54.080639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:36.142 passed 00:03:36.142 Test: mem map registration ...[2024-11-20 07:17:54.135892] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:36.142 [2024-11-20 07:17:54.135921] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:36.142 passed 00:03:36.142 Test: mem map adjacent registrations ...passed 00:03:36.142 00:03:36.142 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.142 suites 1 1 n/a 0 0 00:03:36.142 tests 4 4 4 0 0 00:03:36.142 asserts 152 152 152 0 n/a 00:03:36.142 00:03:36.142 Elapsed time = 0.195 seconds 00:03:36.142 00:03:36.143 real 0m0.209s 00:03:36.143 user 0m0.199s 00:03:36.143 sys 0m0.010s 00:03:36.143 07:17:54 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:36.143 07:17:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
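Each START TEST / END TEST banner in this log is emitted by the run_test helper that wraps the individual test binaries. A simplified sketch of such a wrapper, assuming only bash; the banner shape matches the output in this log, while the internals (timing_enter/timing_exit and the xtrace handling) are an assumption and are omitted:

    #!/usr/bin/env bash
    # Simplified run_test-style wrapper; internals are an assumption.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"                    # the test binary or script with its arguments
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    run_test env_memory ./test/env/memory/memory_ut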
00:03:36.143 ************************************ 00:03:36.143 END TEST env_memory 00:03:36.143 ************************************ 00:03:36.143 07:17:54 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:36.143 07:17:54 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:36.143 07:17:54 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:36.143 07:17:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:36.143 ************************************ 00:03:36.143 START TEST env_vtophys 00:03:36.143 ************************************ 00:03:36.143 07:17:54 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:36.143 EAL: lib.eal log level changed from notice to debug 00:03:36.143 EAL: Detected lcore 0 as core 0 on socket 0 00:03:36.143 EAL: Detected lcore 1 as core 1 on socket 0 00:03:36.143 EAL: Detected lcore 2 as core 2 on socket 0 00:03:36.143 EAL: Detected lcore 3 as core 3 on socket 0 00:03:36.143 EAL: Detected lcore 4 as core 4 on socket 0 00:03:36.143 EAL: Detected lcore 5 as core 5 on socket 0 00:03:36.143 EAL: Detected lcore 6 as core 6 on socket 0 00:03:36.143 EAL: Detected lcore 7 as core 7 on socket 0 00:03:36.143 EAL: Detected lcore 8 as core 8 on socket 0 00:03:36.143 EAL: Detected lcore 9 as core 9 on socket 0 00:03:36.143 EAL: Detected lcore 10 as core 10 on socket 0 00:03:36.143 EAL: Detected lcore 11 as core 11 on socket 0 00:03:36.143 EAL: Detected lcore 12 as core 12 on socket 0 00:03:36.143 EAL: Detected lcore 13 as core 13 on socket 0 00:03:36.143 EAL: Detected lcore 14 as core 14 on socket 0 00:03:36.143 EAL: Detected lcore 15 as core 15 on socket 0 00:03:36.143 EAL: Detected lcore 16 as core 16 on socket 0 00:03:36.143 EAL: Detected lcore 17 as core 17 on socket 0 00:03:36.143 EAL: Detected lcore 18 as core 18 on socket 0 00:03:36.143 EAL: Detected lcore 19 as core 19 on socket 0 00:03:36.143 EAL: Detected lcore 20 as core 20 on socket 0 00:03:36.143 EAL: Detected lcore 21 as core 21 on socket 0 00:03:36.143 EAL: Detected lcore 22 as core 22 on socket 0 00:03:36.143 EAL: Detected lcore 23 as core 23 on socket 0 00:03:36.143 EAL: Detected lcore 24 as core 24 on socket 0 00:03:36.143 EAL: Detected lcore 25 as core 25 on socket 0 00:03:36.143 EAL: Detected lcore 26 as core 26 on socket 0 00:03:36.143 EAL: Detected lcore 27 as core 27 on socket 0 00:03:36.143 EAL: Detected lcore 28 as core 28 on socket 0 00:03:36.143 EAL: Detected lcore 29 as core 29 on socket 0 00:03:36.143 EAL: Detected lcore 30 as core 30 on socket 0 00:03:36.143 EAL: Detected lcore 31 as core 31 on socket 0 00:03:36.143 EAL: Detected lcore 32 as core 32 on socket 0 00:03:36.143 EAL: Detected lcore 33 as core 33 on socket 0 00:03:36.143 EAL: Detected lcore 34 as core 34 on socket 0 00:03:36.143 EAL: Detected lcore 35 as core 35 on socket 0 00:03:36.143 EAL: Detected lcore 36 as core 0 on socket 1 00:03:36.143 EAL: Detected lcore 37 as core 1 on socket 1 00:03:36.143 EAL: Detected lcore 38 as core 2 on socket 1 00:03:36.143 EAL: Detected lcore 39 as core 3 on socket 1 00:03:36.143 EAL: Detected lcore 40 as core 4 on socket 1 00:03:36.143 EAL: Detected lcore 41 as core 5 on socket 1 00:03:36.143 EAL: Detected lcore 42 as core 6 on socket 1 00:03:36.143 EAL: Detected lcore 43 as core 7 on socket 1 00:03:36.143 EAL: Detected lcore 44 as core 8 on socket 1 00:03:36.143 EAL: Detected lcore 45 as core 9 on socket 1 
00:03:36.143 EAL: Detected lcore 46 as core 10 on socket 1 00:03:36.143 EAL: Detected lcore 47 as core 11 on socket 1 00:03:36.143 EAL: Detected lcore 48 as core 12 on socket 1 00:03:36.143 EAL: Detected lcore 49 as core 13 on socket 1 00:03:36.143 EAL: Detected lcore 50 as core 14 on socket 1 00:03:36.143 EAL: Detected lcore 51 as core 15 on socket 1 00:03:36.143 EAL: Detected lcore 52 as core 16 on socket 1 00:03:36.143 EAL: Detected lcore 53 as core 17 on socket 1 00:03:36.143 EAL: Detected lcore 54 as core 18 on socket 1 00:03:36.143 EAL: Detected lcore 55 as core 19 on socket 1 00:03:36.143 EAL: Detected lcore 56 as core 20 on socket 1 00:03:36.143 EAL: Detected lcore 57 as core 21 on socket 1 00:03:36.143 EAL: Detected lcore 58 as core 22 on socket 1 00:03:36.143 EAL: Detected lcore 59 as core 23 on socket 1 00:03:36.143 EAL: Detected lcore 60 as core 24 on socket 1 00:03:36.143 EAL: Detected lcore 61 as core 25 on socket 1 00:03:36.143 EAL: Detected lcore 62 as core 26 on socket 1 00:03:36.143 EAL: Detected lcore 63 as core 27 on socket 1 00:03:36.143 EAL: Detected lcore 64 as core 28 on socket 1 00:03:36.143 EAL: Detected lcore 65 as core 29 on socket 1 00:03:36.143 EAL: Detected lcore 66 as core 30 on socket 1 00:03:36.143 EAL: Detected lcore 67 as core 31 on socket 1 00:03:36.143 EAL: Detected lcore 68 as core 32 on socket 1 00:03:36.143 EAL: Detected lcore 69 as core 33 on socket 1 00:03:36.143 EAL: Detected lcore 70 as core 34 on socket 1 00:03:36.143 EAL: Detected lcore 71 as core 35 on socket 1 00:03:36.143 EAL: Detected lcore 72 as core 0 on socket 0 00:03:36.143 EAL: Detected lcore 73 as core 1 on socket 0 00:03:36.143 EAL: Detected lcore 74 as core 2 on socket 0 00:03:36.143 EAL: Detected lcore 75 as core 3 on socket 0 00:03:36.143 EAL: Detected lcore 76 as core 4 on socket 0 00:03:36.143 EAL: Detected lcore 77 as core 5 on socket 0 00:03:36.143 EAL: Detected lcore 78 as core 6 on socket 0 00:03:36.143 EAL: Detected lcore 79 as core 7 on socket 0 00:03:36.143 EAL: Detected lcore 80 as core 8 on socket 0 00:03:36.143 EAL: Detected lcore 81 as core 9 on socket 0 00:03:36.143 EAL: Detected lcore 82 as core 10 on socket 0 00:03:36.143 EAL: Detected lcore 83 as core 11 on socket 0 00:03:36.143 EAL: Detected lcore 84 as core 12 on socket 0 00:03:36.143 EAL: Detected lcore 85 as core 13 on socket 0 00:03:36.143 EAL: Detected lcore 86 as core 14 on socket 0 00:03:36.143 EAL: Detected lcore 87 as core 15 on socket 0 00:03:36.143 EAL: Detected lcore 88 as core 16 on socket 0 00:03:36.143 EAL: Detected lcore 89 as core 17 on socket 0 00:03:36.143 EAL: Detected lcore 90 as core 18 on socket 0 00:03:36.143 EAL: Detected lcore 91 as core 19 on socket 0 00:03:36.143 EAL: Detected lcore 92 as core 20 on socket 0 00:03:36.143 EAL: Detected lcore 93 as core 21 on socket 0 00:03:36.143 EAL: Detected lcore 94 as core 22 on socket 0 00:03:36.143 EAL: Detected lcore 95 as core 23 on socket 0 00:03:36.143 EAL: Detected lcore 96 as core 24 on socket 0 00:03:36.143 EAL: Detected lcore 97 as core 25 on socket 0 00:03:36.143 EAL: Detected lcore 98 as core 26 on socket 0 00:03:36.143 EAL: Detected lcore 99 as core 27 on socket 0 00:03:36.143 EAL: Detected lcore 100 as core 28 on socket 0 00:03:36.143 EAL: Detected lcore 101 as core 29 on socket 0 00:03:36.143 EAL: Detected lcore 102 as core 30 on socket 0 00:03:36.143 EAL: Detected lcore 103 as core 31 on socket 0 00:03:36.143 EAL: Detected lcore 104 as core 32 on socket 0 00:03:36.143 EAL: Detected lcore 105 as core 33 on socket 0 00:03:36.143 EAL: 
Detected lcore 106 as core 34 on socket 0 00:03:36.143 EAL: Detected lcore 107 as core 35 on socket 0 00:03:36.143 EAL: Detected lcore 108 as core 0 on socket 1 00:03:36.143 EAL: Detected lcore 109 as core 1 on socket 1 00:03:36.143 EAL: Detected lcore 110 as core 2 on socket 1 00:03:36.143 EAL: Detected lcore 111 as core 3 on socket 1 00:03:36.143 EAL: Detected lcore 112 as core 4 on socket 1 00:03:36.143 EAL: Detected lcore 113 as core 5 on socket 1 00:03:36.143 EAL: Detected lcore 114 as core 6 on socket 1 00:03:36.143 EAL: Detected lcore 115 as core 7 on socket 1 00:03:36.143 EAL: Detected lcore 116 as core 8 on socket 1 00:03:36.143 EAL: Detected lcore 117 as core 9 on socket 1 00:03:36.143 EAL: Detected lcore 118 as core 10 on socket 1 00:03:36.143 EAL: Detected lcore 119 as core 11 on socket 1 00:03:36.143 EAL: Detected lcore 120 as core 12 on socket 1 00:03:36.143 EAL: Detected lcore 121 as core 13 on socket 1 00:03:36.143 EAL: Detected lcore 122 as core 14 on socket 1 00:03:36.143 EAL: Detected lcore 123 as core 15 on socket 1 00:03:36.143 EAL: Detected lcore 124 as core 16 on socket 1 00:03:36.143 EAL: Detected lcore 125 as core 17 on socket 1 00:03:36.143 EAL: Detected lcore 126 as core 18 on socket 1 00:03:36.143 EAL: Detected lcore 127 as core 19 on socket 1 00:03:36.143 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:36.143 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:36.143 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:36.143 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:36.143 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:36.143 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:36.143 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:36.143 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:36.143 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:36.143 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:36.143 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:36.143 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:36.143 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:36.143 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:36.143 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:36.143 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:36.143 EAL: Maximum logical cores by configuration: 128 00:03:36.143 EAL: Detected CPU lcores: 128 00:03:36.143 EAL: Detected NUMA nodes: 2 00:03:36.143 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:36.143 EAL: Detected shared linkage of DPDK 00:03:36.143 EAL: No shared files mode enabled, IPC will be disabled 00:03:36.143 EAL: Bus pci wants IOVA as 'DC' 00:03:36.143 EAL: Buses did not request a specific IOVA mode. 00:03:36.405 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:36.405 EAL: Selected IOVA mode 'VA' 00:03:36.405 EAL: Probing VFIO support... 00:03:36.405 EAL: IOMMU type 1 (Type 1) is supported 00:03:36.405 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:36.405 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:36.405 EAL: VFIO support initialized 00:03:36.405 EAL: Ask a virtual area of 0x2e000 bytes 00:03:36.405 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:36.405 EAL: Setting up physically contiguous memory... 
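The lcore inventory above is EAL reading Linux CPU topology. The loop below reproduces the same "lcore N as core M on socket S" mapping from sysfs; it is an illustration of where that data comes from, not DPDK's own implementation:

    #!/usr/bin/env bash
    # Rebuild the EAL-style lcore map from kernel topology files.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        lcore=${cpu##*cpu}
        core=$(<"$cpu/topology/core_id")
        socket=$(<"$cpu/topology/physical_package_id")
        echo "Detected lcore $lcore as core $core on socket $socket"
    done

On this machine that yields 128 usable lcores across 2 sockets, matching the "Maximum logical cores by configuration: 128" summary that follows.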
00:03:36.405 EAL: Setting maximum number of open files to 524288 00:03:36.405 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:36.405 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:36.405 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:36.405 EAL: Ask a virtual area of 0x61000 bytes 00:03:36.405 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:36.405 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:36.405 EAL: Ask a virtual area of 0x400000000 bytes 00:03:36.405 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:36.405 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:36.405 EAL: Ask a virtual area of 0x61000 bytes 00:03:36.405 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:36.405 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:36.405 EAL: Ask a virtual area of 0x400000000 bytes 00:03:36.405 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:36.405 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:36.405 EAL: Ask a virtual area of 0x61000 bytes 00:03:36.405 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:36.405 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:36.405 EAL: Ask a virtual area of 0x400000000 bytes 00:03:36.405 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:36.405 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:36.405 EAL: Ask a virtual area of 0x61000 bytes 00:03:36.405 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:36.405 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:36.405 EAL: Ask a virtual area of 0x400000000 bytes 00:03:36.405 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:36.405 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:36.405 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:36.405 EAL: Ask a virtual area of 0x61000 bytes 00:03:36.405 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:36.405 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:36.405 EAL: Ask a virtual area of 0x400000000 bytes 00:03:36.405 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:36.405 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:36.405 EAL: Ask a virtual area of 0x61000 bytes 00:03:36.405 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:36.405 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:36.405 EAL: Ask a virtual area of 0x400000000 bytes 00:03:36.405 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:36.405 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:36.405 EAL: Ask a virtual area of 0x61000 bytes 00:03:36.405 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:36.405 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:36.405 EAL: Ask a virtual area of 0x400000000 bytes 00:03:36.405 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:36.405 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:36.405 EAL: Ask a virtual area of 0x61000 bytes 00:03:36.405 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:36.405 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:36.405 EAL: Ask a virtual area of 0x400000000 bytes 00:03:36.405 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:36.405 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:36.405 EAL: Hugepages will be freed exactly as allocated. 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: TSC frequency is ~2400000 KHz 00:03:36.405 EAL: Main lcore 0 is ready (tid=7fa7b859da00;cpuset=[0]) 00:03:36.405 EAL: Trying to obtain current memory policy. 00:03:36.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.405 EAL: Restoring previous memory policy: 0 00:03:36.405 EAL: request: mp_malloc_sync 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: Heap on socket 0 was expanded by 2MB 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:36.405 EAL: Mem event callback 'spdk:(nil)' registered 00:03:36.405 00:03:36.405 00:03:36.405 CUnit - A unit testing framework for C - Version 2.1-3 00:03:36.405 http://cunit.sourceforge.net/ 00:03:36.405 00:03:36.405 00:03:36.405 Suite: components_suite 00:03:36.405 Test: vtophys_malloc_test ...passed 00:03:36.405 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:36.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.405 EAL: Restoring previous memory policy: 4 00:03:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.405 EAL: request: mp_malloc_sync 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: Heap on socket 0 was expanded by 4MB 00:03:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.405 EAL: request: mp_malloc_sync 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: Heap on socket 0 was shrunk by 4MB 00:03:36.405 EAL: Trying to obtain current memory policy. 00:03:36.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.405 EAL: Restoring previous memory policy: 4 00:03:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.405 EAL: request: mp_malloc_sync 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: Heap on socket 0 was expanded by 6MB 00:03:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.405 EAL: request: mp_malloc_sync 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: Heap on socket 0 was shrunk by 6MB 00:03:36.405 EAL: Trying to obtain current memory policy. 00:03:36.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.405 EAL: Restoring previous memory policy: 4 00:03:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.405 EAL: request: mp_malloc_sync 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: Heap on socket 0 was expanded by 10MB 00:03:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.405 EAL: request: mp_malloc_sync 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: Heap on socket 0 was shrunk by 10MB 00:03:36.405 EAL: Trying to obtain current memory policy. 
00:03:36.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.405 EAL: Restoring previous memory policy: 4 00:03:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.405 EAL: request: mp_malloc_sync 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: Heap on socket 0 was expanded by 18MB 00:03:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.405 EAL: request: mp_malloc_sync 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: Heap on socket 0 was shrunk by 18MB 00:03:36.405 EAL: Trying to obtain current memory policy. 00:03:36.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.405 EAL: Restoring previous memory policy: 4 00:03:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.405 EAL: request: mp_malloc_sync 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: Heap on socket 0 was expanded by 34MB 00:03:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.405 EAL: request: mp_malloc_sync 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: Heap on socket 0 was shrunk by 34MB 00:03:36.405 EAL: Trying to obtain current memory policy. 00:03:36.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.405 EAL: Restoring previous memory policy: 4 00:03:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.405 EAL: request: mp_malloc_sync 00:03:36.405 EAL: No shared files mode enabled, IPC is disabled 00:03:36.405 EAL: Heap on socket 0 was expanded by 66MB 00:03:36.405 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.406 EAL: request: mp_malloc_sync 00:03:36.406 EAL: No shared files mode enabled, IPC is disabled 00:03:36.406 EAL: Heap on socket 0 was shrunk by 66MB 00:03:36.406 EAL: Trying to obtain current memory policy. 00:03:36.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.406 EAL: Restoring previous memory policy: 4 00:03:36.406 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.406 EAL: request: mp_malloc_sync 00:03:36.406 EAL: No shared files mode enabled, IPC is disabled 00:03:36.406 EAL: Heap on socket 0 was expanded by 130MB 00:03:36.406 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.406 EAL: request: mp_malloc_sync 00:03:36.406 EAL: No shared files mode enabled, IPC is disabled 00:03:36.406 EAL: Heap on socket 0 was shrunk by 130MB 00:03:36.406 EAL: Trying to obtain current memory policy. 00:03:36.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.406 EAL: Restoring previous memory policy: 4 00:03:36.406 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.406 EAL: request: mp_malloc_sync 00:03:36.406 EAL: No shared files mode enabled, IPC is disabled 00:03:36.406 EAL: Heap on socket 0 was expanded by 258MB 00:03:36.406 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.406 EAL: request: mp_malloc_sync 00:03:36.406 EAL: No shared files mode enabled, IPC is disabled 00:03:36.406 EAL: Heap on socket 0 was shrunk by 258MB 00:03:36.406 EAL: Trying to obtain current memory policy. 
00:03:36.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.667 EAL: Restoring previous memory policy: 4 00:03:36.667 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.667 EAL: request: mp_malloc_sync 00:03:36.667 EAL: No shared files mode enabled, IPC is disabled 00:03:36.667 EAL: Heap on socket 0 was expanded by 514MB 00:03:36.667 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.667 EAL: request: mp_malloc_sync 00:03:36.667 EAL: No shared files mode enabled, IPC is disabled 00:03:36.667 EAL: Heap on socket 0 was shrunk by 514MB 00:03:36.667 EAL: Trying to obtain current memory policy. 00:03:36.667 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.928 EAL: Restoring previous memory policy: 4 00:03:36.928 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.928 EAL: request: mp_malloc_sync 00:03:36.928 EAL: No shared files mode enabled, IPC is disabled 00:03:36.928 EAL: Heap on socket 0 was expanded by 1026MB 00:03:36.928 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.928 EAL: request: mp_malloc_sync 00:03:36.928 EAL: No shared files mode enabled, IPC is disabled 00:03:36.928 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:36.928 passed 00:03:36.928 00:03:36.928 Run Summary: Type Total Ran Passed Failed Inactive 00:03:36.928 suites 1 1 n/a 0 0 00:03:36.928 tests 2 2 2 0 0 00:03:36.928 asserts 497 497 497 0 n/a 00:03:36.928 00:03:36.928 Elapsed time = 0.691 seconds 00:03:36.928 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.928 EAL: request: mp_malloc_sync 00:03:36.928 EAL: No shared files mode enabled, IPC is disabled 00:03:36.928 EAL: Heap on socket 0 was shrunk by 2MB 00:03:36.928 EAL: No shared files mode enabled, IPC is disabled 00:03:36.928 EAL: No shared files mode enabled, IPC is disabled 00:03:36.928 EAL: No shared files mode enabled, IPC is disabled 00:03:36.928 00:03:36.928 real 0m0.839s 00:03:36.928 user 0m0.455s 00:03:36.928 sys 0m0.359s 00:03:36.928 07:17:55 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:36.928 07:17:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:36.928 ************************************ 00:03:36.928 END TEST env_vtophys 00:03:36.928 ************************************ 00:03:37.189 07:17:55 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:37.189 07:17:55 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:37.189 07:17:55 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:37.189 07:17:55 env -- common/autotest_common.sh@10 -- # set +x 00:03:37.189 ************************************ 00:03:37.189 START TEST env_pci 00:03:37.189 ************************************ 00:03:37.189 07:17:55 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:37.189 00:03:37.189 00:03:37.189 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.189 http://cunit.sourceforge.net/ 00:03:37.189 00:03:37.189 00:03:37.189 Suite: pci 00:03:37.189 Test: pci_hook ...[2024-11-20 07:17:55.231313] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3154378 has claimed it 00:03:37.189 EAL: Cannot find device (10000:00:01.0) 00:03:37.189 EAL: Failed to attach device on primary process 00:03:37.189 passed 00:03:37.189 00:03:37.189 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:37.189 suites 1 1 n/a 0 0 00:03:37.189 tests 1 1 1 0 0 00:03:37.189 asserts 25 25 25 0 n/a 00:03:37.189 00:03:37.189 Elapsed time = 0.031 seconds 00:03:37.189 00:03:37.189 real 0m0.052s 00:03:37.189 user 0m0.018s 00:03:37.189 sys 0m0.033s 00:03:37.189 07:17:55 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:37.189 07:17:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:37.189 ************************************ 00:03:37.189 END TEST env_pci 00:03:37.189 ************************************ 00:03:37.189 07:17:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:37.189 07:17:55 env -- env/env.sh@15 -- # uname 00:03:37.189 07:17:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:37.189 07:17:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:37.189 07:17:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:37.189 07:17:55 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:03:37.189 07:17:55 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:37.189 07:17:55 env -- common/autotest_common.sh@10 -- # set +x 00:03:37.189 ************************************ 00:03:37.189 START TEST env_dpdk_post_init 00:03:37.189 ************************************ 00:03:37.189 07:17:55 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:37.189 EAL: Detected CPU lcores: 128 00:03:37.189 EAL: Detected NUMA nodes: 2 00:03:37.189 EAL: Detected shared linkage of DPDK 00:03:37.189 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:37.450 EAL: Selected IOVA mode 'VA' 00:03:37.450 EAL: VFIO support initialized 00:03:37.450 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:37.450 EAL: Using IOMMU type 1 (Type 1) 00:03:37.450 EAL: Ignore mapping IO port bar(1) 00:03:37.710 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:37.710 EAL: Ignore mapping IO port bar(1) 00:03:37.972 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:37.972 EAL: Ignore mapping IO port bar(1) 00:03:37.972 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:38.233 EAL: Ignore mapping IO port bar(1) 00:03:38.233 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:38.494 EAL: Ignore mapping IO port bar(1) 00:03:38.494 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:38.755 EAL: Ignore mapping IO port bar(1) 00:03:38.755 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:39.017 EAL: Ignore mapping IO port bar(1) 00:03:39.017 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:39.017 EAL: Ignore mapping IO port bar(1) 00:03:39.278 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:39.539 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:39.539 EAL: Ignore mapping IO port bar(1) 00:03:39.539 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:39.799 EAL: Ignore mapping IO port bar(1) 00:03:39.799 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:40.062 EAL: Ignore mapping IO port bar(1) 00:03:40.062 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:40.323 EAL: Ignore mapping IO port bar(1) 00:03:40.323 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:40.584 EAL: Ignore mapping IO port bar(1) 00:03:40.584 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:40.584 EAL: Ignore mapping IO port bar(1) 00:03:40.846 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:40.846 EAL: Ignore mapping IO port bar(1) 00:03:41.106 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:41.106 EAL: Ignore mapping IO port bar(1) 00:03:41.106 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:41.368 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:41.368 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:41.368 Starting DPDK initialization... 00:03:41.368 Starting SPDK post initialization... 00:03:41.368 SPDK NVMe probe 00:03:41.368 Attaching to 0000:65:00.0 00:03:41.368 Attached to 0000:65:00.0 00:03:41.368 Cleaning up... 00:03:43.284 00:03:43.284 real 0m5.748s 00:03:43.284 user 0m0.094s 00:03:43.284 sys 0m0.213s 00:03:43.284 07:18:01 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:43.284 07:18:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:43.284 ************************************ 00:03:43.284 END TEST env_dpdk_post_init 00:03:43.284 ************************************ 00:03:43.284 07:18:01 env -- env/env.sh@26 -- # uname 00:03:43.284 07:18:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:43.284 07:18:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:43.284 07:18:01 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:43.284 07:18:01 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:43.284 07:18:01 env -- common/autotest_common.sh@10 -- # set +x 00:03:43.284 ************************************ 00:03:43.284 START TEST env_mem_callbacks 00:03:43.284 ************************************ 00:03:43.284 07:18:01 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:43.284 EAL: Detected CPU lcores: 128 00:03:43.284 EAL: Detected NUMA nodes: 2 00:03:43.284 EAL: Detected shared linkage of DPDK 00:03:43.284 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:43.284 EAL: Selected IOVA mode 'VA' 00:03:43.284 EAL: VFIO support initialized 00:03:43.284 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:43.284 00:03:43.284 00:03:43.284 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.284 http://cunit.sourceforge.net/ 00:03:43.284 00:03:43.284 00:03:43.284 Suite: memory 00:03:43.284 Test: test ... 
00:03:43.284 register 0x200000200000 2097152 00:03:43.284 malloc 3145728 00:03:43.284 register 0x200000400000 4194304 00:03:43.284 buf 0x200000500000 len 3145728 PASSED 00:03:43.284 malloc 64 00:03:43.284 buf 0x2000004fff40 len 64 PASSED 00:03:43.284 malloc 4194304 00:03:43.284 register 0x200000800000 6291456 00:03:43.284 buf 0x200000a00000 len 4194304 PASSED 00:03:43.284 free 0x200000500000 3145728 00:03:43.284 free 0x2000004fff40 64 00:03:43.284 unregister 0x200000400000 4194304 PASSED 00:03:43.284 free 0x200000a00000 4194304 00:03:43.284 unregister 0x200000800000 6291456 PASSED 00:03:43.284 malloc 8388608 00:03:43.284 register 0x200000400000 10485760 00:03:43.284 buf 0x200000600000 len 8388608 PASSED 00:03:43.284 free 0x200000600000 8388608 00:03:43.284 unregister 0x200000400000 10485760 PASSED 00:03:43.284 passed 00:03:43.284 00:03:43.284 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.284 suites 1 1 n/a 0 0 00:03:43.284 tests 1 1 1 0 0 00:03:43.284 asserts 15 15 15 0 n/a 00:03:43.284 00:03:43.284 Elapsed time = 0.010 seconds 00:03:43.284 00:03:43.284 real 0m0.069s 00:03:43.284 user 0m0.023s 00:03:43.284 sys 0m0.047s 00:03:43.284 07:18:01 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:43.284 07:18:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:43.284 ************************************ 00:03:43.284 END TEST env_mem_callbacks 00:03:43.284 ************************************ 00:03:43.284 00:03:43.284 real 0m7.540s 00:03:43.284 user 0m1.036s 00:03:43.284 sys 0m1.072s 00:03:43.284 07:18:01 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:43.284 07:18:01 env -- common/autotest_common.sh@10 -- # set +x 00:03:43.284 ************************************ 00:03:43.284 END TEST env 00:03:43.284 ************************************ 00:03:43.284 07:18:01 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:43.284 07:18:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:43.284 07:18:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:43.284 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:03:43.284 ************************************ 00:03:43.284 START TEST rpc 00:03:43.284 ************************************ 00:03:43.284 07:18:01 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:43.284 * Looking for test storage... 
00:03:43.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:43.546 07:18:01 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:43.546 07:18:01 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:43.546 07:18:01 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:43.546 07:18:01 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.546 07:18:01 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:43.546 07:18:01 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:43.546 07:18:01 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:43.546 07:18:01 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:43.546 07:18:01 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:43.546 07:18:01 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:43.546 07:18:01 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:43.546 07:18:01 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:43.546 07:18:01 rpc -- scripts/common.sh@345 -- # : 1 00:03:43.546 07:18:01 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:43.546 07:18:01 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:43.546 07:18:01 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:43.546 07:18:01 rpc -- scripts/common.sh@353 -- # local d=1 00:03:43.546 07:18:01 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.546 07:18:01 rpc -- scripts/common.sh@355 -- # echo 1 00:03:43.546 07:18:01 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.546 07:18:01 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:43.546 07:18:01 rpc -- scripts/common.sh@353 -- # local d=2 00:03:43.546 07:18:01 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.546 07:18:01 rpc -- scripts/common.sh@355 -- # echo 2 00:03:43.546 07:18:01 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.546 07:18:01 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.546 07:18:01 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.546 07:18:01 rpc -- scripts/common.sh@368 -- # return 0 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:43.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.546 --rc genhtml_branch_coverage=1 00:03:43.546 --rc genhtml_function_coverage=1 00:03:43.546 --rc genhtml_legend=1 00:03:43.546 --rc geninfo_all_blocks=1 00:03:43.546 --rc geninfo_unexecuted_blocks=1 00:03:43.546 00:03:43.546 ' 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:43.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.546 --rc genhtml_branch_coverage=1 00:03:43.546 --rc genhtml_function_coverage=1 00:03:43.546 --rc genhtml_legend=1 00:03:43.546 --rc geninfo_all_blocks=1 00:03:43.546 --rc geninfo_unexecuted_blocks=1 00:03:43.546 00:03:43.546 ' 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:43.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.546 --rc genhtml_branch_coverage=1 00:03:43.546 --rc genhtml_function_coverage=1 
00:03:43.546 --rc genhtml_legend=1 00:03:43.546 --rc geninfo_all_blocks=1 00:03:43.546 --rc geninfo_unexecuted_blocks=1 00:03:43.546 00:03:43.546 ' 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:43.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.546 --rc genhtml_branch_coverage=1 00:03:43.546 --rc genhtml_function_coverage=1 00:03:43.546 --rc genhtml_legend=1 00:03:43.546 --rc geninfo_all_blocks=1 00:03:43.546 --rc geninfo_unexecuted_blocks=1 00:03:43.546 00:03:43.546 ' 00:03:43.546 07:18:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3155736 00:03:43.546 07:18:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:43.546 07:18:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3155736 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@833 -- # '[' -z 3155736 ']' 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:43.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:43.546 07:18:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.546 07:18:01 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:43.546 [2024-11-20 07:18:01.649406] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:03:43.546 [2024-11-20 07:18:01.649478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155736 ] 00:03:43.546 [2024-11-20 07:18:01.741410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.808 [2024-11-20 07:18:01.794412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:43.808 [2024-11-20 07:18:01.794466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3155736' to capture a snapshot of events at runtime. 00:03:43.808 [2024-11-20 07:18:01.794475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:43.808 [2024-11-20 07:18:01.794483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:43.808 [2024-11-20 07:18:01.794489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3155736 for offline analysis/debug. 
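The target above was launched with '-e bdev', so the bdev tpoint group (mask 0x8, visible later in the trace_get_info output) records into /dev/shm/spdk_tgt_trace.pid3155736 while the process runs. A minimal way to act on the hint the log prints, assuming the spdk_trace decoder from the same build tree (the '-f' offline mode is our assumption; check 'spdk_trace -h'):

    # live snapshot of the running target, exactly as the notice suggests
    build/bin/spdk_trace -s spdk_tgt -p 3155736
    # or keep the shm file for offline analysis after the target exits
    cp /dev/shm/spdk_tgt_trace.pid3155736 /tmp/
    build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid3155736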
00:03:43.808 [2024-11-20 07:18:01.795315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:44.380 07:18:02 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:44.380 07:18:02 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:44.380 07:18:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:44.380 07:18:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:44.380 07:18:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:44.380 07:18:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:44.380 07:18:02 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:44.380 07:18:02 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.380 07:18:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.380 ************************************ 00:03:44.380 START TEST rpc_integrity 00:03:44.380 ************************************ 00:03:44.380 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:44.380 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:44.380 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.380 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.380 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.380 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:44.380 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:44.380 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:44.380 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:44.380 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.380 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.380 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.380 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:44.380 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:44.380 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:44.642 { 00:03:44.642 "name": "Malloc0", 00:03:44.642 "aliases": [ 00:03:44.642 "9b586bbf-097a-450a-b9e0-e357bf969bd4" 00:03:44.642 ], 00:03:44.642 "product_name": "Malloc disk", 00:03:44.642 "block_size": 512, 00:03:44.642 "num_blocks": 16384, 00:03:44.642 "uuid": "9b586bbf-097a-450a-b9e0-e357bf969bd4", 00:03:44.642 "assigned_rate_limits": { 00:03:44.642 "rw_ios_per_sec": 0, 00:03:44.642 "rw_mbytes_per_sec": 0, 00:03:44.642 "r_mbytes_per_sec": 0, 00:03:44.642 "w_mbytes_per_sec": 0 00:03:44.642 }, 
00:03:44.642 "claimed": false, 00:03:44.642 "zoned": false, 00:03:44.642 "supported_io_types": { 00:03:44.642 "read": true, 00:03:44.642 "write": true, 00:03:44.642 "unmap": true, 00:03:44.642 "flush": true, 00:03:44.642 "reset": true, 00:03:44.642 "nvme_admin": false, 00:03:44.642 "nvme_io": false, 00:03:44.642 "nvme_io_md": false, 00:03:44.642 "write_zeroes": true, 00:03:44.642 "zcopy": true, 00:03:44.642 "get_zone_info": false, 00:03:44.642 "zone_management": false, 00:03:44.642 "zone_append": false, 00:03:44.642 "compare": false, 00:03:44.642 "compare_and_write": false, 00:03:44.642 "abort": true, 00:03:44.642 "seek_hole": false, 00:03:44.642 "seek_data": false, 00:03:44.642 "copy": true, 00:03:44.642 "nvme_iov_md": false 00:03:44.642 }, 00:03:44.642 "memory_domains": [ 00:03:44.642 { 00:03:44.642 "dma_device_id": "system", 00:03:44.642 "dma_device_type": 1 00:03:44.642 }, 00:03:44.642 { 00:03:44.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:44.642 "dma_device_type": 2 00:03:44.642 } 00:03:44.642 ], 00:03:44.642 "driver_specific": {} 00:03:44.642 } 00:03:44.642 ]' 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.642 [2024-11-20 07:18:02.656678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:44.642 [2024-11-20 07:18:02.656727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:44.642 [2024-11-20 07:18:02.656743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d56800 00:03:44.642 [2024-11-20 07:18:02.656757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:44.642 [2024-11-20 07:18:02.658305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:44.642 [2024-11-20 07:18:02.658341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:44.642 Passthru0 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:44.642 { 00:03:44.642 "name": "Malloc0", 00:03:44.642 "aliases": [ 00:03:44.642 "9b586bbf-097a-450a-b9e0-e357bf969bd4" 00:03:44.642 ], 00:03:44.642 "product_name": "Malloc disk", 00:03:44.642 "block_size": 512, 00:03:44.642 "num_blocks": 16384, 00:03:44.642 "uuid": "9b586bbf-097a-450a-b9e0-e357bf969bd4", 00:03:44.642 "assigned_rate_limits": { 00:03:44.642 "rw_ios_per_sec": 0, 00:03:44.642 "rw_mbytes_per_sec": 0, 00:03:44.642 "r_mbytes_per_sec": 0, 00:03:44.642 "w_mbytes_per_sec": 0 00:03:44.642 }, 00:03:44.642 "claimed": true, 00:03:44.642 "claim_type": "exclusive_write", 00:03:44.642 "zoned": false, 00:03:44.642 "supported_io_types": { 00:03:44.642 "read": true, 00:03:44.642 "write": true, 00:03:44.642 "unmap": true, 00:03:44.642 "flush": 
true, 00:03:44.642 "reset": true, 00:03:44.642 "nvme_admin": false, 00:03:44.642 "nvme_io": false, 00:03:44.642 "nvme_io_md": false, 00:03:44.642 "write_zeroes": true, 00:03:44.642 "zcopy": true, 00:03:44.642 "get_zone_info": false, 00:03:44.642 "zone_management": false, 00:03:44.642 "zone_append": false, 00:03:44.642 "compare": false, 00:03:44.642 "compare_and_write": false, 00:03:44.642 "abort": true, 00:03:44.642 "seek_hole": false, 00:03:44.642 "seek_data": false, 00:03:44.642 "copy": true, 00:03:44.642 "nvme_iov_md": false 00:03:44.642 }, 00:03:44.642 "memory_domains": [ 00:03:44.642 { 00:03:44.642 "dma_device_id": "system", 00:03:44.642 "dma_device_type": 1 00:03:44.642 }, 00:03:44.642 { 00:03:44.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:44.642 "dma_device_type": 2 00:03:44.642 } 00:03:44.642 ], 00:03:44.642 "driver_specific": {} 00:03:44.642 }, 00:03:44.642 { 00:03:44.642 "name": "Passthru0", 00:03:44.642 "aliases": [ 00:03:44.642 "9b08feb6-8453-57dc-b9db-476a7c9cdc0e" 00:03:44.642 ], 00:03:44.642 "product_name": "passthru", 00:03:44.642 "block_size": 512, 00:03:44.642 "num_blocks": 16384, 00:03:44.642 "uuid": "9b08feb6-8453-57dc-b9db-476a7c9cdc0e", 00:03:44.642 "assigned_rate_limits": { 00:03:44.642 "rw_ios_per_sec": 0, 00:03:44.642 "rw_mbytes_per_sec": 0, 00:03:44.642 "r_mbytes_per_sec": 0, 00:03:44.642 "w_mbytes_per_sec": 0 00:03:44.642 }, 00:03:44.642 "claimed": false, 00:03:44.642 "zoned": false, 00:03:44.642 "supported_io_types": { 00:03:44.642 "read": true, 00:03:44.642 "write": true, 00:03:44.642 "unmap": true, 00:03:44.642 "flush": true, 00:03:44.642 "reset": true, 00:03:44.642 "nvme_admin": false, 00:03:44.642 "nvme_io": false, 00:03:44.642 "nvme_io_md": false, 00:03:44.642 "write_zeroes": true, 00:03:44.642 "zcopy": true, 00:03:44.642 "get_zone_info": false, 00:03:44.642 "zone_management": false, 00:03:44.642 "zone_append": false, 00:03:44.642 "compare": false, 00:03:44.642 "compare_and_write": false, 00:03:44.642 "abort": true, 00:03:44.642 "seek_hole": false, 00:03:44.642 "seek_data": false, 00:03:44.642 "copy": true, 00:03:44.642 "nvme_iov_md": false 00:03:44.642 }, 00:03:44.642 "memory_domains": [ 00:03:44.642 { 00:03:44.642 "dma_device_id": "system", 00:03:44.642 "dma_device_type": 1 00:03:44.642 }, 00:03:44.642 { 00:03:44.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:44.642 "dma_device_type": 2 00:03:44.642 } 00:03:44.642 ], 00:03:44.642 "driver_specific": { 00:03:44.642 "passthru": { 00:03:44.642 "name": "Passthru0", 00:03:44.642 "base_bdev_name": "Malloc0" 00:03:44.642 } 00:03:44.642 } 00:03:44.642 } 00:03:44.642 ]' 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:44.642 07:18:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:44.642 00:03:44.642 real 0m0.312s 00:03:44.642 user 0m0.188s 00:03:44.642 sys 0m0.050s 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:44.642 07:18:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.642 ************************************ 00:03:44.642 END TEST rpc_integrity 00:03:44.642 ************************************ 00:03:44.904 07:18:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:44.904 07:18:02 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:44.904 07:18:02 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.904 07:18:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.904 ************************************ 00:03:44.904 START TEST rpc_plugins 00:03:44.904 ************************************ 00:03:44.904 07:18:02 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:44.904 07:18:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:44.904 07:18:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.904 07:18:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:44.904 07:18:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.904 07:18:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:44.904 07:18:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:44.904 07:18:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.904 07:18:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:44.904 07:18:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.904 07:18:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:44.904 { 00:03:44.904 "name": "Malloc1", 00:03:44.904 "aliases": [ 00:03:44.904 "f888c942-6cd3-42e8-944e-60b6a1b1cd1d" 00:03:44.904 ], 00:03:44.904 "product_name": "Malloc disk", 00:03:44.904 "block_size": 4096, 00:03:44.904 "num_blocks": 256, 00:03:44.904 "uuid": "f888c942-6cd3-42e8-944e-60b6a1b1cd1d", 00:03:44.904 "assigned_rate_limits": { 00:03:44.904 "rw_ios_per_sec": 0, 00:03:44.904 "rw_mbytes_per_sec": 0, 00:03:44.904 "r_mbytes_per_sec": 0, 00:03:44.904 "w_mbytes_per_sec": 0 00:03:44.904 }, 00:03:44.904 "claimed": false, 00:03:44.904 "zoned": false, 00:03:44.904 "supported_io_types": { 00:03:44.904 "read": true, 00:03:44.904 "write": true, 00:03:44.904 "unmap": true, 00:03:44.904 "flush": true, 00:03:44.904 "reset": true, 00:03:44.904 "nvme_admin": false, 00:03:44.904 "nvme_io": false, 00:03:44.904 "nvme_io_md": false, 00:03:44.904 "write_zeroes": true, 00:03:44.904 "zcopy": true, 00:03:44.904 "get_zone_info": false, 00:03:44.904 "zone_management": false, 00:03:44.904 "zone_append": false, 00:03:44.904 "compare": false, 00:03:44.904 "compare_and_write": false, 00:03:44.904 "abort": true, 00:03:44.904 "seek_hole": false, 00:03:44.904 "seek_data": false, 00:03:44.904 "copy": true, 00:03:44.904 "nvme_iov_md": false 
00:03:44.904 }, 00:03:44.904 "memory_domains": [ 00:03:44.904 { 00:03:44.904 "dma_device_id": "system", 00:03:44.904 "dma_device_type": 1 00:03:44.904 }, 00:03:44.904 { 00:03:44.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:44.904 "dma_device_type": 2 00:03:44.904 } 00:03:44.904 ], 00:03:44.904 "driver_specific": {} 00:03:44.904 } 00:03:44.904 ]' 00:03:44.904 07:18:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:44.904 07:18:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:44.904 07:18:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:44.904 07:18:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.904 07:18:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:44.904 07:18:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.904 07:18:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:44.904 07:18:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.904 07:18:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:44.904 07:18:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.904 07:18:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:44.904 07:18:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:44.904 07:18:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:44.904 00:03:44.904 real 0m0.149s 00:03:44.904 user 0m0.089s 00:03:44.904 sys 0m0.024s 00:03:44.904 07:18:03 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:44.904 07:18:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:44.904 ************************************ 00:03:44.904 END TEST rpc_plugins 00:03:44.904 ************************************ 00:03:44.904 07:18:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:44.904 07:18:03 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:44.904 07:18:03 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.904 07:18:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.165 ************************************ 00:03:45.165 START TEST rpc_trace_cmd_test 00:03:45.165 ************************************ 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:45.165 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3155736", 00:03:45.165 "tpoint_group_mask": "0x8", 00:03:45.165 "iscsi_conn": { 00:03:45.165 "mask": "0x2", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "scsi": { 00:03:45.165 "mask": "0x4", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "bdev": { 00:03:45.165 "mask": "0x8", 00:03:45.165 "tpoint_mask": "0xffffffffffffffff" 00:03:45.165 }, 00:03:45.165 "nvmf_rdma": { 00:03:45.165 "mask": "0x10", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "nvmf_tcp": { 00:03:45.165 "mask": "0x20", 00:03:45.165 
"tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "ftl": { 00:03:45.165 "mask": "0x40", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "blobfs": { 00:03:45.165 "mask": "0x80", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "dsa": { 00:03:45.165 "mask": "0x200", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "thread": { 00:03:45.165 "mask": "0x400", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "nvme_pcie": { 00:03:45.165 "mask": "0x800", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "iaa": { 00:03:45.165 "mask": "0x1000", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "nvme_tcp": { 00:03:45.165 "mask": "0x2000", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "bdev_nvme": { 00:03:45.165 "mask": "0x4000", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "sock": { 00:03:45.165 "mask": "0x8000", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "blob": { 00:03:45.165 "mask": "0x10000", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "bdev_raid": { 00:03:45.165 "mask": "0x20000", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 }, 00:03:45.165 "scheduler": { 00:03:45.165 "mask": "0x40000", 00:03:45.165 "tpoint_mask": "0x0" 00:03:45.165 } 00:03:45.165 }' 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:45.165 07:18:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:45.427 07:18:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:45.427 00:03:45.427 real 0m0.248s 00:03:45.427 user 0m0.205s 00:03:45.427 sys 0m0.036s 00:03:45.427 07:18:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:45.427 07:18:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:45.427 ************************************ 00:03:45.427 END TEST rpc_trace_cmd_test 00:03:45.427 ************************************ 00:03:45.427 07:18:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:45.427 07:18:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:45.427 07:18:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:45.427 07:18:03 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:45.427 07:18:03 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:45.427 07:18:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.427 ************************************ 00:03:45.427 START TEST rpc_daemon_integrity 00:03:45.427 ************************************ 00:03:45.427 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:45.427 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:45.427 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:45.427 07:18:03 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.427 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:45.427 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:45.427 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:45.428 { 00:03:45.428 "name": "Malloc2", 00:03:45.428 "aliases": [ 00:03:45.428 "4e0ab923-0958-4553-be29-5bb22f4926e8" 00:03:45.428 ], 00:03:45.428 "product_name": "Malloc disk", 00:03:45.428 "block_size": 512, 00:03:45.428 "num_blocks": 16384, 00:03:45.428 "uuid": "4e0ab923-0958-4553-be29-5bb22f4926e8", 00:03:45.428 "assigned_rate_limits": { 00:03:45.428 "rw_ios_per_sec": 0, 00:03:45.428 "rw_mbytes_per_sec": 0, 00:03:45.428 "r_mbytes_per_sec": 0, 00:03:45.428 "w_mbytes_per_sec": 0 00:03:45.428 }, 00:03:45.428 "claimed": false, 00:03:45.428 "zoned": false, 00:03:45.428 "supported_io_types": { 00:03:45.428 "read": true, 00:03:45.428 "write": true, 00:03:45.428 "unmap": true, 00:03:45.428 "flush": true, 00:03:45.428 "reset": true, 00:03:45.428 "nvme_admin": false, 00:03:45.428 "nvme_io": false, 00:03:45.428 "nvme_io_md": false, 00:03:45.428 "write_zeroes": true, 00:03:45.428 "zcopy": true, 00:03:45.428 "get_zone_info": false, 00:03:45.428 "zone_management": false, 00:03:45.428 "zone_append": false, 00:03:45.428 "compare": false, 00:03:45.428 "compare_and_write": false, 00:03:45.428 "abort": true, 00:03:45.428 "seek_hole": false, 00:03:45.428 "seek_data": false, 00:03:45.428 "copy": true, 00:03:45.428 "nvme_iov_md": false 00:03:45.428 }, 00:03:45.428 "memory_domains": [ 00:03:45.428 { 00:03:45.428 "dma_device_id": "system", 00:03:45.428 "dma_device_type": 1 00:03:45.428 }, 00:03:45.428 { 00:03:45.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:45.428 "dma_device_type": 2 00:03:45.428 } 00:03:45.428 ], 00:03:45.428 "driver_specific": {} 00:03:45.428 } 00:03:45.428 ]' 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.428 [2024-11-20 07:18:03.599272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:45.428 
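The vbdev_passthru registration sequence that begins here (match on the base bdev, then open, io_device creation, and claim) is the same one rpc_integrity exercised above with Malloc0. The flow can be reproduced by hand against a running spdk_tgt with the stock rpc.py client; the sizes mirror the test's 8 MB / 512 B malloc, and naming the bdev with '-b' is our choice rather than the test's:

    ./scripts/rpc.py bdev_malloc_create 8 512 -b Malloc2      # 16384 x 512 B blocks
    ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length               # 2: base + passthru
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc2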
[2024-11-20 07:18:03.599315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:45.428 [2024-11-20 07:18:03.599330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ca3550 00:03:45.428 [2024-11-20 07:18:03.599338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:45.428 [2024-11-20 07:18:03.600901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:45.428 [2024-11-20 07:18:03.600936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:45.428 Passthru0 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:45.428 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:45.428 { 00:03:45.428 "name": "Malloc2", 00:03:45.428 "aliases": [ 00:03:45.428 "4e0ab923-0958-4553-be29-5bb22f4926e8" 00:03:45.428 ], 00:03:45.428 "product_name": "Malloc disk", 00:03:45.428 "block_size": 512, 00:03:45.428 "num_blocks": 16384, 00:03:45.428 "uuid": "4e0ab923-0958-4553-be29-5bb22f4926e8", 00:03:45.428 "assigned_rate_limits": { 00:03:45.428 "rw_ios_per_sec": 0, 00:03:45.428 "rw_mbytes_per_sec": 0, 00:03:45.428 "r_mbytes_per_sec": 0, 00:03:45.428 "w_mbytes_per_sec": 0 00:03:45.428 }, 00:03:45.428 "claimed": true, 00:03:45.428 "claim_type": "exclusive_write", 00:03:45.428 "zoned": false, 00:03:45.428 "supported_io_types": { 00:03:45.428 "read": true, 00:03:45.428 "write": true, 00:03:45.428 "unmap": true, 00:03:45.428 "flush": true, 00:03:45.428 "reset": true, 00:03:45.428 "nvme_admin": false, 00:03:45.428 "nvme_io": false, 00:03:45.428 "nvme_io_md": false, 00:03:45.428 "write_zeroes": true, 00:03:45.428 "zcopy": true, 00:03:45.428 "get_zone_info": false, 00:03:45.428 "zone_management": false, 00:03:45.428 "zone_append": false, 00:03:45.428 "compare": false, 00:03:45.428 "compare_and_write": false, 00:03:45.428 "abort": true, 00:03:45.428 "seek_hole": false, 00:03:45.428 "seek_data": false, 00:03:45.428 "copy": true, 00:03:45.428 "nvme_iov_md": false 00:03:45.428 }, 00:03:45.428 "memory_domains": [ 00:03:45.428 { 00:03:45.428 "dma_device_id": "system", 00:03:45.428 "dma_device_type": 1 00:03:45.428 }, 00:03:45.428 { 00:03:45.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:45.428 "dma_device_type": 2 00:03:45.428 } 00:03:45.428 ], 00:03:45.428 "driver_specific": {} 00:03:45.428 }, 00:03:45.428 { 00:03:45.428 "name": "Passthru0", 00:03:45.428 "aliases": [ 00:03:45.428 "0d24c602-14da-50df-b488-14efb1b80c6a" 00:03:45.428 ], 00:03:45.428 "product_name": "passthru", 00:03:45.428 "block_size": 512, 00:03:45.428 "num_blocks": 16384, 00:03:45.428 "uuid": "0d24c602-14da-50df-b488-14efb1b80c6a", 00:03:45.428 "assigned_rate_limits": { 00:03:45.428 "rw_ios_per_sec": 0, 00:03:45.428 "rw_mbytes_per_sec": 0, 00:03:45.428 "r_mbytes_per_sec": 0, 00:03:45.428 "w_mbytes_per_sec": 0 00:03:45.428 }, 00:03:45.428 "claimed": false, 00:03:45.428 "zoned": false, 00:03:45.428 "supported_io_types": { 00:03:45.428 "read": true, 00:03:45.428 "write": true, 00:03:45.428 "unmap": true, 00:03:45.428 "flush": true, 00:03:45.428 "reset": true, 
00:03:45.428 "nvme_admin": false, 00:03:45.428 "nvme_io": false, 00:03:45.428 "nvme_io_md": false, 00:03:45.428 "write_zeroes": true, 00:03:45.428 "zcopy": true, 00:03:45.428 "get_zone_info": false, 00:03:45.428 "zone_management": false, 00:03:45.428 "zone_append": false, 00:03:45.429 "compare": false, 00:03:45.429 "compare_and_write": false, 00:03:45.429 "abort": true, 00:03:45.429 "seek_hole": false, 00:03:45.429 "seek_data": false, 00:03:45.429 "copy": true, 00:03:45.429 "nvme_iov_md": false 00:03:45.429 }, 00:03:45.429 "memory_domains": [ 00:03:45.429 { 00:03:45.429 "dma_device_id": "system", 00:03:45.429 "dma_device_type": 1 00:03:45.429 }, 00:03:45.429 { 00:03:45.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:45.429 "dma_device_type": 2 00:03:45.429 } 00:03:45.429 ], 00:03:45.429 "driver_specific": { 00:03:45.429 "passthru": { 00:03:45.429 "name": "Passthru0", 00:03:45.429 "base_bdev_name": "Malloc2" 00:03:45.429 } 00:03:45.429 } 00:03:45.429 } 00:03:45.429 ]' 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:45.691 00:03:45.691 real 0m0.311s 00:03:45.691 user 0m0.185s 00:03:45.691 sys 0m0.056s 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:45.691 07:18:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:45.691 ************************************ 00:03:45.691 END TEST rpc_daemon_integrity 00:03:45.691 ************************************ 00:03:45.691 07:18:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:45.691 07:18:03 rpc -- rpc/rpc.sh@84 -- # killprocess 3155736 00:03:45.691 07:18:03 rpc -- common/autotest_common.sh@952 -- # '[' -z 3155736 ']' 00:03:45.691 07:18:03 rpc -- common/autotest_common.sh@956 -- # kill -0 3155736 00:03:45.691 07:18:03 rpc -- common/autotest_common.sh@957 -- # uname 00:03:45.691 07:18:03 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:45.691 07:18:03 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3155736 
00:03:45.691 07:18:03 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:45.691 07:18:03 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:45.691 07:18:03 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3155736' 00:03:45.691 killing process with pid 3155736 00:03:45.691 07:18:03 rpc -- common/autotest_common.sh@971 -- # kill 3155736 00:03:45.691 07:18:03 rpc -- common/autotest_common.sh@976 -- # wait 3155736 00:03:45.952 00:03:45.952 real 0m2.733s 00:03:45.952 user 0m3.466s 00:03:45.952 sys 0m0.860s 00:03:45.952 07:18:04 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:45.952 07:18:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.952 ************************************ 00:03:45.952 END TEST rpc 00:03:45.952 ************************************ 00:03:46.213 07:18:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:46.213 07:18:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:46.213 07:18:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:46.213 07:18:04 -- common/autotest_common.sh@10 -- # set +x 00:03:46.213 ************************************ 00:03:46.213 START TEST skip_rpc 00:03:46.213 ************************************ 00:03:46.213 07:18:04 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:46.213 * Looking for test storage... 00:03:46.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:46.213 07:18:04 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:46.213 07:18:04 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:46.213 07:18:04 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:46.213 07:18:04 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.213 07:18:04 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:46.213 07:18:04 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.213 07:18:04 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:46.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.213 --rc genhtml_branch_coverage=1 00:03:46.213 --rc genhtml_function_coverage=1 00:03:46.213 --rc genhtml_legend=1 00:03:46.213 --rc geninfo_all_blocks=1 00:03:46.213 --rc geninfo_unexecuted_blocks=1 00:03:46.213 00:03:46.213 ' 00:03:46.213 07:18:04 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:46.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.213 --rc genhtml_branch_coverage=1 00:03:46.213 --rc genhtml_function_coverage=1 00:03:46.213 --rc genhtml_legend=1 00:03:46.213 --rc geninfo_all_blocks=1 00:03:46.213 --rc geninfo_unexecuted_blocks=1 00:03:46.213 00:03:46.213 ' 00:03:46.213 07:18:04 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:46.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.213 --rc genhtml_branch_coverage=1 00:03:46.213 --rc genhtml_function_coverage=1 00:03:46.213 --rc genhtml_legend=1 00:03:46.213 --rc geninfo_all_blocks=1 00:03:46.213 --rc geninfo_unexecuted_blocks=1 00:03:46.213 00:03:46.213 ' 00:03:46.213 07:18:04 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:46.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.213 --rc genhtml_branch_coverage=1 00:03:46.213 --rc genhtml_function_coverage=1 00:03:46.213 --rc genhtml_legend=1 00:03:46.213 --rc geninfo_all_blocks=1 00:03:46.213 --rc geninfo_unexecuted_blocks=1 00:03:46.213 00:03:46.213 ' 00:03:46.213 07:18:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:46.213 07:18:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:46.213 07:18:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:46.213 07:18:04 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:46.213 07:18:04 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:46.213 07:18:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.474 ************************************ 00:03:46.474 START TEST skip_rpc 00:03:46.474 ************************************ 00:03:46.474 07:18:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:46.474 
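The test body that follows reduces to starting the target with its RPC server disabled and proving a client cannot reach it. A hand-run equivalent, using the same binaries and the spdk_get_version call the test attempts:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5    # the test sleeps rather than waitforlisten: there is nothing to listen
    ./scripts/rpc.py spdk_get_version && echo 'unexpected: RPC answered' || echo 'RPC refused, as intended'
    kill %1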
07:18:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3156604 00:03:46.474 07:18:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:46.474 07:18:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:46.474 07:18:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:46.474 [2024-11-20 07:18:04.511912] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:03:46.474 [2024-11-20 07:18:04.511972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3156604 ] 00:03:46.474 [2024-11-20 07:18:04.604469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.474 [2024-11-20 07:18:04.657053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3156604 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3156604 ']' 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3156604 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3156604 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3156604' 00:03:51.763 killing process with pid 3156604 00:03:51.763 07:18:09 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3156604 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3156604 00:03:51.763 00:03:51.763 real 0m5.262s 00:03:51.763 user 0m4.995s 00:03:51.763 sys 0m0.310s 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:51.763 07:18:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.763 ************************************ 00:03:51.763 END TEST skip_rpc 00:03:51.763 ************************************ 00:03:51.763 07:18:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:51.763 07:18:09 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:51.763 07:18:09 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:51.763 07:18:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.763 ************************************ 00:03:51.763 START TEST skip_rpc_with_json 00:03:51.763 ************************************ 00:03:51.763 07:18:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:03:51.763 07:18:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:51.763 07:18:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3157763 00:03:51.763 07:18:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:51.763 07:18:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3157763 00:03:51.763 07:18:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:51.763 07:18:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3157763 ']' 00:03:51.763 07:18:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.763 07:18:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:51.763 07:18:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:51.763 07:18:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:51.763 07:18:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:51.763 [2024-11-20 07:18:09.843792] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
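The skip_rpc run that finishes above reduces to one assertion: with spdk_tgt started under --no-rpc-server, any RPC attempt must fail. A minimal sketch of that check, assuming the workspace paths from this log and rpc.py's default socket of /var/tmp/spdk.sock; the real logic lives in test/rpc/skip_rpc.sh:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5    # skip_rpc.sh@19 sleeps rather than waiting on a socket (there is none to wait on)
if "$SPDK/scripts/rpc.py" spdk_get_version; then
    echo "FAIL: RPC succeeded with no RPC server running" >&2
    exit 1
fi
kill "$spdk_pid"

The NOT/valid_exec_arg wrapper traced above does the same inversion: it records es=1 from the failed rpc_cmd and treats that non-zero status as the pass condition.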
00:03:51.763 [2024-11-20 07:18:09.843854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3157763 ] 00:03:51.763 [2024-11-20 07:18:09.932529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.023 [2024-11-20 07:18:09.972190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.595 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:52.595 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:03:52.595 07:18:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:52.595 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.595 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:52.595 [2024-11-20 07:18:10.658582] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:52.595 request: 00:03:52.595 { 00:03:52.595 "trtype": "tcp", 00:03:52.595 "method": "nvmf_get_transports", 00:03:52.595 "req_id": 1 00:03:52.595 } 00:03:52.595 Got JSON-RPC error response 00:03:52.595 response: 00:03:52.595 { 00:03:52.595 "code": -19, 00:03:52.595 "message": "No such device" 00:03:52.595 } 00:03:52.595 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:52.595 07:18:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:52.595 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.595 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:52.595 [2024-11-20 07:18:10.670681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:52.595 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.595 07:18:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:52.595 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.595 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:52.856 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.856 07:18:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:52.856 { 00:03:52.856 "subsystems": [ 00:03:52.856 { 00:03:52.856 "subsystem": "fsdev", 00:03:52.856 "config": [ 00:03:52.856 { 00:03:52.856 "method": "fsdev_set_opts", 00:03:52.856 "params": { 00:03:52.856 "fsdev_io_pool_size": 65535, 00:03:52.856 "fsdev_io_cache_size": 256 00:03:52.856 } 00:03:52.856 } 00:03:52.856 ] 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "vfio_user_target", 00:03:52.856 "config": null 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "keyring", 00:03:52.856 "config": [] 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "iobuf", 00:03:52.856 "config": [ 00:03:52.856 { 00:03:52.856 "method": "iobuf_set_options", 00:03:52.856 "params": { 00:03:52.856 "small_pool_count": 8192, 00:03:52.856 "large_pool_count": 1024, 00:03:52.856 "small_bufsize": 8192, 00:03:52.856 "large_bufsize": 135168, 00:03:52.856 "enable_numa": false 00:03:52.856 } 00:03:52.856 } 
00:03:52.856 ] 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "sock", 00:03:52.856 "config": [ 00:03:52.856 { 00:03:52.856 "method": "sock_set_default_impl", 00:03:52.856 "params": { 00:03:52.856 "impl_name": "posix" 00:03:52.856 } 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "method": "sock_impl_set_options", 00:03:52.856 "params": { 00:03:52.856 "impl_name": "ssl", 00:03:52.856 "recv_buf_size": 4096, 00:03:52.856 "send_buf_size": 4096, 00:03:52.856 "enable_recv_pipe": true, 00:03:52.856 "enable_quickack": false, 00:03:52.856 "enable_placement_id": 0, 00:03:52.856 "enable_zerocopy_send_server": true, 00:03:52.856 "enable_zerocopy_send_client": false, 00:03:52.856 "zerocopy_threshold": 0, 00:03:52.856 "tls_version": 0, 00:03:52.856 "enable_ktls": false 00:03:52.856 } 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "method": "sock_impl_set_options", 00:03:52.856 "params": { 00:03:52.856 "impl_name": "posix", 00:03:52.856 "recv_buf_size": 2097152, 00:03:52.856 "send_buf_size": 2097152, 00:03:52.856 "enable_recv_pipe": true, 00:03:52.856 "enable_quickack": false, 00:03:52.856 "enable_placement_id": 0, 00:03:52.856 "enable_zerocopy_send_server": true, 00:03:52.856 "enable_zerocopy_send_client": false, 00:03:52.856 "zerocopy_threshold": 0, 00:03:52.856 "tls_version": 0, 00:03:52.856 "enable_ktls": false 00:03:52.856 } 00:03:52.856 } 00:03:52.856 ] 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "vmd", 00:03:52.856 "config": [] 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "accel", 00:03:52.856 "config": [ 00:03:52.856 { 00:03:52.856 "method": "accel_set_options", 00:03:52.856 "params": { 00:03:52.856 "small_cache_size": 128, 00:03:52.856 "large_cache_size": 16, 00:03:52.856 "task_count": 2048, 00:03:52.856 "sequence_count": 2048, 00:03:52.856 "buf_count": 2048 00:03:52.856 } 00:03:52.856 } 00:03:52.856 ] 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "bdev", 00:03:52.856 "config": [ 00:03:52.856 { 00:03:52.856 "method": "bdev_set_options", 00:03:52.856 "params": { 00:03:52.856 "bdev_io_pool_size": 65535, 00:03:52.856 "bdev_io_cache_size": 256, 00:03:52.856 "bdev_auto_examine": true, 00:03:52.856 "iobuf_small_cache_size": 128, 00:03:52.856 "iobuf_large_cache_size": 16 00:03:52.856 } 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "method": "bdev_raid_set_options", 00:03:52.856 "params": { 00:03:52.856 "process_window_size_kb": 1024, 00:03:52.856 "process_max_bandwidth_mb_sec": 0 00:03:52.856 } 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "method": "bdev_iscsi_set_options", 00:03:52.856 "params": { 00:03:52.856 "timeout_sec": 30 00:03:52.856 } 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "method": "bdev_nvme_set_options", 00:03:52.856 "params": { 00:03:52.856 "action_on_timeout": "none", 00:03:52.856 "timeout_us": 0, 00:03:52.856 "timeout_admin_us": 0, 00:03:52.856 "keep_alive_timeout_ms": 10000, 00:03:52.856 "arbitration_burst": 0, 00:03:52.856 "low_priority_weight": 0, 00:03:52.856 "medium_priority_weight": 0, 00:03:52.856 "high_priority_weight": 0, 00:03:52.856 "nvme_adminq_poll_period_us": 10000, 00:03:52.856 "nvme_ioq_poll_period_us": 0, 00:03:52.856 "io_queue_requests": 0, 00:03:52.856 "delay_cmd_submit": true, 00:03:52.856 "transport_retry_count": 4, 00:03:52.856 "bdev_retry_count": 3, 00:03:52.856 "transport_ack_timeout": 0, 00:03:52.856 "ctrlr_loss_timeout_sec": 0, 00:03:52.856 "reconnect_delay_sec": 0, 00:03:52.856 "fast_io_fail_timeout_sec": 0, 00:03:52.856 "disable_auto_failback": false, 00:03:52.856 "generate_uuids": false, 00:03:52.856 "transport_tos": 
0, 00:03:52.856 "nvme_error_stat": false, 00:03:52.856 "rdma_srq_size": 0, 00:03:52.856 "io_path_stat": false, 00:03:52.856 "allow_accel_sequence": false, 00:03:52.856 "rdma_max_cq_size": 0, 00:03:52.856 "rdma_cm_event_timeout_ms": 0, 00:03:52.856 "dhchap_digests": [ 00:03:52.856 "sha256", 00:03:52.856 "sha384", 00:03:52.856 "sha512" 00:03:52.856 ], 00:03:52.856 "dhchap_dhgroups": [ 00:03:52.856 "null", 00:03:52.856 "ffdhe2048", 00:03:52.856 "ffdhe3072", 00:03:52.856 "ffdhe4096", 00:03:52.856 "ffdhe6144", 00:03:52.856 "ffdhe8192" 00:03:52.856 ] 00:03:52.856 } 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "method": "bdev_nvme_set_hotplug", 00:03:52.856 "params": { 00:03:52.856 "period_us": 100000, 00:03:52.856 "enable": false 00:03:52.856 } 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "method": "bdev_wait_for_examine" 00:03:52.856 } 00:03:52.856 ] 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "scsi", 00:03:52.856 "config": null 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "scheduler", 00:03:52.856 "config": [ 00:03:52.856 { 00:03:52.856 "method": "framework_set_scheduler", 00:03:52.856 "params": { 00:03:52.856 "name": "static" 00:03:52.856 } 00:03:52.856 } 00:03:52.856 ] 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "vhost_scsi", 00:03:52.856 "config": [] 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "vhost_blk", 00:03:52.856 "config": [] 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "ublk", 00:03:52.856 "config": [] 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "nbd", 00:03:52.856 "config": [] 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "subsystem": "nvmf", 00:03:52.856 "config": [ 00:03:52.856 { 00:03:52.856 "method": "nvmf_set_config", 00:03:52.856 "params": { 00:03:52.856 "discovery_filter": "match_any", 00:03:52.856 "admin_cmd_passthru": { 00:03:52.856 "identify_ctrlr": false 00:03:52.856 }, 00:03:52.856 "dhchap_digests": [ 00:03:52.856 "sha256", 00:03:52.856 "sha384", 00:03:52.856 "sha512" 00:03:52.856 ], 00:03:52.856 "dhchap_dhgroups": [ 00:03:52.856 "null", 00:03:52.856 "ffdhe2048", 00:03:52.856 "ffdhe3072", 00:03:52.856 "ffdhe4096", 00:03:52.856 "ffdhe6144", 00:03:52.856 "ffdhe8192" 00:03:52.856 ] 00:03:52.856 } 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "method": "nvmf_set_max_subsystems", 00:03:52.856 "params": { 00:03:52.856 "max_subsystems": 1024 00:03:52.856 } 00:03:52.856 }, 00:03:52.856 { 00:03:52.856 "method": "nvmf_set_crdt", 00:03:52.856 "params": { 00:03:52.856 "crdt1": 0, 00:03:52.856 "crdt2": 0, 00:03:52.856 "crdt3": 0 00:03:52.857 } 00:03:52.857 }, 00:03:52.857 { 00:03:52.857 "method": "nvmf_create_transport", 00:03:52.857 "params": { 00:03:52.857 "trtype": "TCP", 00:03:52.857 "max_queue_depth": 128, 00:03:52.857 "max_io_qpairs_per_ctrlr": 127, 00:03:52.857 "in_capsule_data_size": 4096, 00:03:52.857 "max_io_size": 131072, 00:03:52.857 "io_unit_size": 131072, 00:03:52.857 "max_aq_depth": 128, 00:03:52.857 "num_shared_buffers": 511, 00:03:52.857 "buf_cache_size": 4294967295, 00:03:52.857 "dif_insert_or_strip": false, 00:03:52.857 "zcopy": false, 00:03:52.857 "c2h_success": true, 00:03:52.857 "sock_priority": 0, 00:03:52.857 "abort_timeout_sec": 1, 00:03:52.857 "ack_timeout": 0, 00:03:52.857 "data_wr_pool_size": 0 00:03:52.857 } 00:03:52.857 } 00:03:52.857 ] 00:03:52.857 }, 00:03:52.857 { 00:03:52.857 "subsystem": "iscsi", 00:03:52.857 "config": [ 00:03:52.857 { 00:03:52.857 "method": "iscsi_set_options", 00:03:52.857 "params": { 00:03:52.857 "node_base": "iqn.2016-06.io.spdk", 00:03:52.857 "max_sessions": 
128, 00:03:52.857 "max_connections_per_session": 2, 00:03:52.857 "max_queue_depth": 64, 00:03:52.857 "default_time2wait": 2, 00:03:52.857 "default_time2retain": 20, 00:03:52.857 "first_burst_length": 8192, 00:03:52.857 "immediate_data": true, 00:03:52.857 "allow_duplicated_isid": false, 00:03:52.857 "error_recovery_level": 0, 00:03:52.857 "nop_timeout": 60, 00:03:52.857 "nop_in_interval": 30, 00:03:52.857 "disable_chap": false, 00:03:52.857 "require_chap": false, 00:03:52.857 "mutual_chap": false, 00:03:52.857 "chap_group": 0, 00:03:52.857 "max_large_datain_per_connection": 64, 00:03:52.857 "max_r2t_per_connection": 4, 00:03:52.857 "pdu_pool_size": 36864, 00:03:52.857 "immediate_data_pool_size": 16384, 00:03:52.857 "data_out_pool_size": 2048 00:03:52.857 } 00:03:52.857 } 00:03:52.857 ] 00:03:52.857 } 00:03:52.857 ] 00:03:52.857 } 00:03:52.857 07:18:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:52.857 07:18:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3157763 00:03:52.857 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3157763 ']' 00:03:52.857 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3157763 00:03:52.857 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:03:52.857 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:52.857 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3157763 00:03:52.857 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:52.857 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:52.857 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3157763' 00:03:52.857 killing process with pid 3157763 00:03:52.857 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3157763 00:03:52.857 07:18:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3157763 00:03:53.119 07:18:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3158328 00:03:53.119 07:18:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:53.119 07:18:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3158328 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3158328 ']' 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3158328 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3158328 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 3158328' 00:03:58.403 killing process with pid 3158328 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3158328 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3158328 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:58.403 00:03:58.403 real 0m6.582s 00:03:58.403 user 0m6.490s 00:03:58.403 sys 0m0.578s 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.403 ************************************ 00:03:58.403 END TEST skip_rpc_with_json 00:03:58.403 ************************************ 00:03:58.403 07:18:16 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:58.403 07:18:16 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:58.403 07:18:16 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:58.403 07:18:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.403 ************************************ 00:03:58.403 START TEST skip_rpc_with_delay 00:03:58.403 ************************************ 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:58.403 
[2024-11-20 07:18:16.504697] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:58.403 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:58.403 00:03:58.404 real 0m0.079s 00:03:58.404 user 0m0.053s 00:03:58.404 sys 0m0.026s 00:03:58.404 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:58.404 07:18:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:58.404 ************************************ 00:03:58.404 END TEST skip_rpc_with_delay 00:03:58.404 ************************************ 00:03:58.404 07:18:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:58.404 07:18:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:58.404 07:18:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:58.404 07:18:16 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:58.404 07:18:16 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:58.404 07:18:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.404 ************************************ 00:03:58.404 START TEST exit_on_failed_rpc_init 00:03:58.404 ************************************ 00:03:58.404 07:18:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:03:58.404 07:18:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3159515 00:03:58.404 07:18:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3159515 00:03:58.404 07:18:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:58.404 07:18:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3159515 ']' 00:03:58.404 07:18:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.404 07:18:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:58.404 07:18:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.404 07:18:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:58.404 07:18:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:58.663 [2024-11-20 07:18:16.659164] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
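The skip_rpc_with_delay case that ends just above is a pure argument-validation check: spdk_tgt rejects --wait-for-rpc when --no-rpc-server is also given (the app.c:842 error), so the test only needs the command to exit non-zero. A sketch under the same path assumption:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
if "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
    exit 1
fi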
00:03:58.663 [2024-11-20 07:18:16.659210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159515 ] 00:03:58.663 [2024-11-20 07:18:16.743484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.663 [2024-11-20 07:18:16.774559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:59.678 [2024-11-20 07:18:17.524101] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:03:59.678 [2024-11-20 07:18:17.524157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159694 ] 00:03:59.678 [2024-11-20 07:18:17.611400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.678 [2024-11-20 07:18:17.647501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:59.678 [2024-11-20 07:18:17.647548] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:59.678 [2024-11-20 07:18:17.647558] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:59.678 [2024-11-20 07:18:17.647565] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3159515 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3159515 ']' 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3159515 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3159515 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3159515' 00:03:59.678 killing process with pid 3159515 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3159515 00:03:59.678 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3159515 00:03:59.966 00:03:59.966 real 0m1.330s 00:03:59.966 user 0m1.584s 00:03:59.966 sys 0m0.366s 00:03:59.966 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:59.966 07:18:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:59.966 ************************************ 00:03:59.966 END TEST exit_on_failed_rpc_init 00:03:59.966 ************************************ 00:03:59.966 07:18:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:59.966 00:03:59.966 real 0m13.778s 00:03:59.966 user 0m13.350s 00:03:59.966 sys 0m1.606s 00:03:59.966 07:18:17 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:59.966 07:18:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.966 ************************************ 00:03:59.966 END TEST skip_rpc 00:03:59.966 ************************************ 00:03:59.966 07:18:18 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:59.966 07:18:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:59.966 07:18:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:59.966 07:18:18 -- 
common/autotest_common.sh@10 -- # set +x 00:03:59.966 ************************************ 00:03:59.966 START TEST rpc_client 00:03:59.966 ************************************ 00:03:59.966 07:18:18 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:59.966 * Looking for test storage... 00:03:59.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:59.966 07:18:18 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:59.966 07:18:18 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:03:59.966 07:18:18 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:00.228 07:18:18 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.228 07:18:18 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:00.228 07:18:18 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.228 07:18:18 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:00.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.228 --rc genhtml_branch_coverage=1 00:04:00.228 --rc genhtml_function_coverage=1 00:04:00.228 --rc genhtml_legend=1 00:04:00.228 --rc geninfo_all_blocks=1 00:04:00.228 --rc geninfo_unexecuted_blocks=1 00:04:00.228 00:04:00.228 ' 00:04:00.228 07:18:18 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:00.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.228 --rc genhtml_branch_coverage=1 00:04:00.228 --rc genhtml_function_coverage=1 00:04:00.228 --rc genhtml_legend=1 00:04:00.228 --rc geninfo_all_blocks=1 00:04:00.228 --rc geninfo_unexecuted_blocks=1 00:04:00.228 00:04:00.228 ' 00:04:00.228 07:18:18 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:00.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.228 --rc genhtml_branch_coverage=1 00:04:00.228 --rc genhtml_function_coverage=1 00:04:00.228 --rc genhtml_legend=1 00:04:00.228 --rc geninfo_all_blocks=1 00:04:00.228 --rc geninfo_unexecuted_blocks=1 00:04:00.228 00:04:00.228 ' 00:04:00.228 07:18:18 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:00.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.228 --rc genhtml_branch_coverage=1 00:04:00.228 --rc genhtml_function_coverage=1 00:04:00.228 --rc genhtml_legend=1 00:04:00.228 --rc geninfo_all_blocks=1 00:04:00.228 --rc geninfo_unexecuted_blocks=1 00:04:00.228 00:04:00.228 ' 00:04:00.228 07:18:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:00.228 OK 00:04:00.228 07:18:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:00.228 00:04:00.228 real 0m0.224s 00:04:00.228 user 0m0.133s 00:04:00.228 sys 0m0.104s 00:04:00.228 07:18:18 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:00.228 07:18:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:00.228 ************************************ 00:04:00.228 END TEST rpc_client 00:04:00.228 ************************************ 00:04:00.228 07:18:18 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
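The cmp_versions trace above (scripts/common.sh) decides whether the installed lcov predates 2.x by splitting both version strings into fields and comparing them numerically, field by field, up to the longer of the two. The same idea as a self-contained sketch; the function name ver_lt is hypothetical (the script's own helpers are lt() and cmp_versions(), and they split on '.', '-' and ':' rather than dots alone):

ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)    # split each version into numeric fields
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # missing fields compare as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov is older than 2.x"

This is why the run above selects the pre-2.x LCOV_OPTS with --rc lcov_branch_coverage=1 style flags.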
00:04:00.228 07:18:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:00.228 07:18:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:00.228 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:04:00.228 ************************************ 00:04:00.228 START TEST json_config 00:04:00.228 ************************************ 00:04:00.228 07:18:18 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:00.228 07:18:18 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:00.228 07:18:18 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:00.228 07:18:18 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:00.490 07:18:18 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:00.491 07:18:18 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.491 07:18:18 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.491 07:18:18 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.491 07:18:18 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.491 07:18:18 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.491 07:18:18 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.491 07:18:18 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.491 07:18:18 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.491 07:18:18 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.491 07:18:18 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.491 07:18:18 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.491 07:18:18 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:00.491 07:18:18 json_config -- scripts/common.sh@345 -- # : 1 00:04:00.491 07:18:18 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.491 07:18:18 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.491 07:18:18 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:00.491 07:18:18 json_config -- scripts/common.sh@353 -- # local d=1 00:04:00.491 07:18:18 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.491 07:18:18 json_config -- scripts/common.sh@355 -- # echo 1 00:04:00.491 07:18:18 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.491 07:18:18 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:00.491 07:18:18 json_config -- scripts/common.sh@353 -- # local d=2 00:04:00.491 07:18:18 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.491 07:18:18 json_config -- scripts/common.sh@355 -- # echo 2 00:04:00.491 07:18:18 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.491 07:18:18 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.491 07:18:18 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.491 07:18:18 json_config -- scripts/common.sh@368 -- # return 0 00:04:00.491 07:18:18 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.491 07:18:18 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:00.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.491 --rc genhtml_branch_coverage=1 00:04:00.491 --rc genhtml_function_coverage=1 00:04:00.491 --rc genhtml_legend=1 00:04:00.491 --rc geninfo_all_blocks=1 00:04:00.491 --rc geninfo_unexecuted_blocks=1 00:04:00.491 00:04:00.491 ' 00:04:00.491 07:18:18 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:00.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.491 --rc genhtml_branch_coverage=1 00:04:00.491 --rc genhtml_function_coverage=1 00:04:00.491 --rc genhtml_legend=1 00:04:00.491 --rc geninfo_all_blocks=1 00:04:00.491 --rc geninfo_unexecuted_blocks=1 00:04:00.491 00:04:00.491 ' 00:04:00.491 07:18:18 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:00.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.491 --rc genhtml_branch_coverage=1 00:04:00.491 --rc genhtml_function_coverage=1 00:04:00.491 --rc genhtml_legend=1 00:04:00.491 --rc geninfo_all_blocks=1 00:04:00.491 --rc geninfo_unexecuted_blocks=1 00:04:00.491 00:04:00.491 ' 00:04:00.491 07:18:18 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:00.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.491 --rc genhtml_branch_coverage=1 00:04:00.491 --rc genhtml_function_coverage=1 00:04:00.491 --rc genhtml_legend=1 00:04:00.491 --rc geninfo_all_blocks=1 00:04:00.491 --rc geninfo_unexecuted_blocks=1 00:04:00.491 00:04:00.491 ' 00:04:00.491 07:18:18 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:00.491 07:18:18 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:00.491 07:18:18 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:00.491 07:18:18 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:00.491 07:18:18 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:00.491 07:18:18 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:00.491 07:18:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.491 07:18:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.491 07:18:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.491 07:18:18 json_config -- paths/export.sh@5 -- # export PATH 00:04:00.491 07:18:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@51 -- # : 0 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:00.491 07:18:18 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:00.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:00.491 07:18:18 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:00.491 07:18:18 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:00.491 07:18:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:00.491 07:18:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:00.491 07:18:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:00.491 07:18:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:00.491 07:18:18 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:00.491 07:18:18 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:00.491 07:18:18 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:00.491 07:18:18 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:00.492 07:18:18 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:00.492 07:18:18 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:00.492 07:18:18 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:00.492 07:18:18 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:00.492 07:18:18 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:00.492 07:18:18 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:00.492 07:18:18 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:00.492 INFO: JSON configuration test init 00:04:00.492 07:18:18 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:00.492 07:18:18 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:00.492 07:18:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.492 07:18:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.492 07:18:18 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:00.492 07:18:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.492 07:18:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.492 07:18:18 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:00.492 07:18:18 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:00.492 07:18:18 json_config -- json_config/common.sh@10 -- # shift 00:04:00.492 07:18:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:00.492 07:18:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:00.492 07:18:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:00.492 07:18:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.492 07:18:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.492 07:18:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3159989 00:04:00.492 07:18:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:00.492 Waiting for target to run... 00:04:00.492 07:18:18 json_config -- json_config/common.sh@25 -- # waitforlisten 3159989 /var/tmp/spdk_tgt.sock 00:04:00.492 07:18:18 json_config -- common/autotest_common.sh@833 -- # '[' -z 3159989 ']' 00:04:00.492 07:18:18 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:00.492 07:18:18 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:00.492 07:18:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:00.492 07:18:18 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:00.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:00.492 07:18:18 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:00.492 07:18:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.492 [2024-11-20 07:18:18.638985] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
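Unlike the rpc tests, json_config runs its target on a private socket so it cannot collide with anything else on the node: -r /var/tmp/spdk_tgt.sock plus --wait-for-rpc, as traced above, with every later rpc.py call passing -s to reach it. A sketch of that handshake, assuming the same workspace paths (the test's tgt_rpc wrapper does exactly this, and waitforlisten polls the socket instead of sleeping):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/spdk_tgt.sock
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
sleep 1    # stand-in for waitforlisten
"$SPDK/scripts/gen_nvme.sh" --json-with-subsystems | "$SPDK/scripts/rpc.py" -s "$SOCK" load_config
"$SPDK/scripts/rpc.py" -s "$SOCK" notify_get_types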
00:04:00.492 [2024-11-20 07:18:18.639059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159989 ] 00:04:00.753 [2024-11-20 07:18:18.922376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.753 [2024-11-20 07:18:18.949455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.324 07:18:19 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:01.324 07:18:19 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:01.324 07:18:19 json_config -- json_config/common.sh@26 -- # echo '' 00:04:01.324 00:04:01.324 07:18:19 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:01.324 07:18:19 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:01.324 07:18:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:01.324 07:18:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.324 07:18:19 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:01.324 07:18:19 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:01.324 07:18:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:01.324 07:18:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.324 07:18:19 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:01.324 07:18:19 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:01.324 07:18:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:01.892 07:18:20 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:01.892 07:18:20 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:01.892 07:18:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:01.892 07:18:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.892 07:18:20 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:01.892 07:18:20 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:01.892 07:18:20 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:01.892 07:18:20 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:01.893 07:18:20 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:01.893 07:18:20 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:01.893 07:18:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:01.893 07:18:20 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:02.152 07:18:20 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@54 -- # sort 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:02.152 07:18:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:02.152 07:18:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:02.152 07:18:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:02.152 07:18:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:02.152 07:18:20 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:02.152 07:18:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:02.413 MallocForNvmf0 00:04:02.413 07:18:20 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:02.413 07:18:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:02.674 MallocForNvmf1 00:04:02.674 07:18:20 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:02.674 07:18:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:02.674 [2024-11-20 07:18:20.800119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:02.674 07:18:20 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:02.674 07:18:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:02.934 07:18:20 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:02.934 07:18:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:03.195 07:18:21 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:03.195 07:18:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:03.195 07:18:21 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:03.195 07:18:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:03.457 [2024-11-20 07:18:21.474171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:03.457 07:18:21 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:03.457 07:18:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.457 07:18:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.457 07:18:21 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:03.457 07:18:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.457 07:18:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.457 07:18:21 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:03.457 07:18:21 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:03.457 07:18:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:03.718 MallocBdevForConfigChangeCheck 00:04:03.718 07:18:21 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:03.718 07:18:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.718 07:18:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.718 07:18:21 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:03.718 07:18:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.978 07:18:22 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:03.978 INFO: shutting down applications... 
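The trace above drives a complete NVMe-oF target configuration over the RPC socket: two malloc bdevs, a TCP transport, one subsystem, two namespaces, and a listener. As a minimal sketch, the same sequence can be replayed by hand against a running spdk_tgt; every rpc.py flag below is the one visible in this run, while the short relative paths assume the current directory is an SPDK checkout:

    # Assumed: spdk_tgt already running with RPC socket /var/tmp/spdk_tgt.sock
    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024-byte blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, 8 KiB IO unit
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420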
00:04:03.978 07:18:22 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:03.978 07:18:22 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:03.978 07:18:22 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:03.978 07:18:22 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:04.550 Calling clear_iscsi_subsystem 00:04:04.550 Calling clear_nvmf_subsystem 00:04:04.550 Calling clear_nbd_subsystem 00:04:04.550 Calling clear_ublk_subsystem 00:04:04.550 Calling clear_vhost_blk_subsystem 00:04:04.550 Calling clear_vhost_scsi_subsystem 00:04:04.550 Calling clear_bdev_subsystem 00:04:04.550 07:18:22 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:04.550 07:18:22 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:04.550 07:18:22 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:04.550 07:18:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:04.550 07:18:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:04.550 07:18:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:04.811 07:18:22 json_config -- json_config/json_config.sh@352 -- # break 00:04:04.811 07:18:22 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:04.811 07:18:22 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:04.811 07:18:22 json_config -- json_config/common.sh@31 -- # local app=target 00:04:04.811 07:18:22 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:04.811 07:18:22 json_config -- json_config/common.sh@35 -- # [[ -n 3159989 ]] 00:04:04.811 07:18:22 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3159989 00:04:04.811 07:18:22 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:04.811 07:18:22 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:04.811 07:18:22 json_config -- json_config/common.sh@41 -- # kill -0 3159989 00:04:04.811 07:18:22 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:05.383 07:18:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:05.383 07:18:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.383 07:18:23 json_config -- json_config/common.sh@41 -- # kill -0 3159989 00:04:05.383 07:18:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:05.383 07:18:23 json_config -- json_config/common.sh@43 -- # break 00:04:05.383 07:18:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:05.383 07:18:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:05.383 SPDK target shutdown done 00:04:05.383 07:18:23 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:05.383 INFO: relaunching applications... 
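The shutdown path traced here is a simple poll loop: json_config/common.sh sends SIGINT to the target PID and then re-probes it every half second for at most 30 iterations (roughly a 15 s cap) before declaring the shutdown done. A minimal sketch of that pattern, with the variable name assumed:

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # kill -0 only probes existence
        sleep 0.5
    done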
00:04:05.383 07:18:23 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.383 07:18:23 json_config -- json_config/common.sh@9 -- # local app=target 00:04:05.383 07:18:23 json_config -- json_config/common.sh@10 -- # shift 00:04:05.383 07:18:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:05.383 07:18:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:05.383 07:18:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:05.383 07:18:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:05.383 07:18:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:05.383 07:18:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3161125 00:04:05.383 07:18:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:05.383 Waiting for target to run... 00:04:05.383 07:18:23 json_config -- json_config/common.sh@25 -- # waitforlisten 3161125 /var/tmp/spdk_tgt.sock 00:04:05.383 07:18:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.383 07:18:23 json_config -- common/autotest_common.sh@833 -- # '[' -z 3161125 ']' 00:04:05.383 07:18:23 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:05.383 07:18:23 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:05.383 07:18:23 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:05.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:05.383 07:18:23 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:05.383 07:18:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.383 [2024-11-20 07:18:23.507426] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:04:05.384 [2024-11-20 07:18:23.507488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161125 ] 00:04:05.644 [2024-11-20 07:18:23.843385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.904 [2024-11-20 07:18:23.869036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.166 [2024-11-20 07:18:24.366528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:06.427 [2024-11-20 07:18:24.398946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:06.427 07:18:24 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:06.427 07:18:24 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:06.427 07:18:24 json_config -- json_config/common.sh@26 -- # echo '' 00:04:06.427 00:04:06.427 07:18:24 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:06.427 07:18:24 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:06.427 INFO: Checking if target configuration is the same... 
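What follows is the idempotency check: the relaunched target's live configuration is saved again, both JSON documents are normalized with config_filter.py, and the normalized copies are diffed. A sketch of the same comparison, assuming config_filter.py filters stdin to stdout as json_diff.sh uses it, and with hypothetical temp-file names in place of the mktemp output:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
    ./test/json_config/config_filter.py -method sort \
        < ./spdk_tgt_config.json > /tmp/ref_sorted.json
    diff -u /tmp/live_sorted.json /tmp/ref_sorted.json \
        && echo 'INFO: JSON config files are the same'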
00:04:06.427 07:18:24 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:06.427 07:18:24 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.427 07:18:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:06.427 + '[' 2 -ne 2 ']' 00:04:06.427 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:06.428 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:06.428 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:06.428 +++ basename /dev/fd/62 00:04:06.428 ++ mktemp /tmp/62.XXX 00:04:06.428 + tmp_file_1=/tmp/62.H2u 00:04:06.428 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.428 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:06.428 + tmp_file_2=/tmp/spdk_tgt_config.json.rdh 00:04:06.428 + ret=0 00:04:06.428 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:06.688 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:06.688 + diff -u /tmp/62.H2u /tmp/spdk_tgt_config.json.rdh 00:04:06.688 + echo 'INFO: JSON config files are the same' 00:04:06.688 INFO: JSON config files are the same 00:04:06.688 + rm /tmp/62.H2u /tmp/spdk_tgt_config.json.rdh 00:04:06.688 + exit 0 00:04:06.688 07:18:24 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:06.688 07:18:24 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:06.688 INFO: changing configuration and checking if this can be detected... 00:04:06.688 07:18:24 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:06.688 07:18:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:06.947 07:18:25 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.948 07:18:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:06.948 07:18:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:06.948 + '[' 2 -ne 2 ']' 00:04:06.948 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:06.948 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:06.948 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:06.948 +++ basename /dev/fd/62 00:04:06.948 ++ mktemp /tmp/62.XXX 00:04:06.948 + tmp_file_1=/tmp/62.DYD 00:04:06.948 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.948 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:06.948 + tmp_file_2=/tmp/spdk_tgt_config.json.DQN 00:04:06.948 + ret=0 00:04:06.948 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:07.209 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:07.209 + diff -u /tmp/62.DYD /tmp/spdk_tgt_config.json.DQN 00:04:07.209 + ret=1 00:04:07.209 + echo '=== Start of file: /tmp/62.DYD ===' 00:04:07.209 + cat /tmp/62.DYD 00:04:07.209 + echo '=== End of file: /tmp/62.DYD ===' 00:04:07.209 + echo '' 00:04:07.209 + echo '=== Start of file: /tmp/spdk_tgt_config.json.DQN ===' 00:04:07.209 + cat /tmp/spdk_tgt_config.json.DQN 00:04:07.209 + echo '=== End of file: /tmp/spdk_tgt_config.json.DQN ===' 00:04:07.209 + echo '' 00:04:07.209 + rm /tmp/62.DYD /tmp/spdk_tgt_config.json.DQN 00:04:07.209 + exit 1 00:04:07.209 07:18:25 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:07.209 INFO: configuration change detected. 00:04:07.209 07:18:25 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:07.209 07:18:25 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:07.209 07:18:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.209 07:18:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.209 07:18:25 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:07.209 07:18:25 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:07.209 07:18:25 json_config -- json_config/json_config.sh@324 -- # [[ -n 3161125 ]] 00:04:07.209 07:18:25 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:07.209 07:18:25 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:07.209 07:18:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.209 07:18:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.209 07:18:25 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:07.209 07:18:25 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:07.209 07:18:25 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:07.209 07:18:25 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:07.470 07:18:25 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:07.470 07:18:25 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:07.470 07:18:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:07.470 07:18:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.470 07:18:25 json_config -- json_config/json_config.sh@330 -- # killprocess 3161125 00:04:07.470 07:18:25 json_config -- common/autotest_common.sh@952 -- # '[' -z 3161125 ']' 00:04:07.470 07:18:25 json_config -- common/autotest_common.sh@956 -- # kill -0 3161125 00:04:07.470 07:18:25 json_config -- common/autotest_common.sh@957 -- # uname 00:04:07.470 07:18:25 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:07.470 07:18:25 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3161125 00:04:07.470 07:18:25 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:07.470 07:18:25 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:07.470 07:18:25 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3161125' 00:04:07.470 killing process with pid 3161125 00:04:07.470 07:18:25 json_config -- common/autotest_common.sh@971 -- # kill 3161125 00:04:07.470 07:18:25 json_config -- common/autotest_common.sh@976 -- # wait 3161125 00:04:07.731 07:18:25 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.731 07:18:25 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:07.731 07:18:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:07.731 07:18:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.731 07:18:25 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:07.731 07:18:25 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:07.731 INFO: Success 00:04:07.731 00:04:07.731 real 0m7.471s 00:04:07.731 user 0m9.032s 00:04:07.731 sys 0m1.982s 00:04:07.731 07:18:25 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.731 07:18:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.731 ************************************ 00:04:07.731 END TEST json_config 00:04:07.731 ************************************ 00:04:07.731 07:18:25 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:07.731 07:18:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:07.731 07:18:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:07.731 07:18:25 -- common/autotest_common.sh@10 -- # set +x 00:04:07.731 ************************************ 00:04:07.731 START TEST json_config_extra_key 00:04:07.731 ************************************ 00:04:07.731 07:18:25 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:07.992 07:18:25 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:07.992 07:18:25 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:07.992 07:18:25 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:07.992 07:18:26 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.992 07:18:26 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.992 07:18:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.993 07:18:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.993 07:18:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:07.993 07:18:26 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.993 07:18:26 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:07.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.993 --rc genhtml_branch_coverage=1 00:04:07.993 --rc genhtml_function_coverage=1 00:04:07.993 --rc genhtml_legend=1 00:04:07.993 --rc geninfo_all_blocks=1 00:04:07.993 --rc geninfo_unexecuted_blocks=1 00:04:07.993 00:04:07.993 ' 00:04:07.993 07:18:26 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:07.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.993 --rc genhtml_branch_coverage=1 00:04:07.993 --rc genhtml_function_coverage=1 00:04:07.993 --rc genhtml_legend=1 00:04:07.993 --rc geninfo_all_blocks=1 00:04:07.993 --rc geninfo_unexecuted_blocks=1 00:04:07.993 00:04:07.993 ' 00:04:07.993 07:18:26 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:07.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.993 --rc genhtml_branch_coverage=1 00:04:07.993 --rc genhtml_function_coverage=1 00:04:07.993 --rc genhtml_legend=1 00:04:07.993 --rc geninfo_all_blocks=1 00:04:07.993 --rc geninfo_unexecuted_blocks=1 00:04:07.993 00:04:07.993 ' 00:04:07.993 07:18:26 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:07.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.993 --rc genhtml_branch_coverage=1 00:04:07.993 --rc genhtml_function_coverage=1 00:04:07.993 --rc genhtml_legend=1 00:04:07.993 --rc geninfo_all_blocks=1 00:04:07.993 --rc geninfo_unexecuted_blocks=1 00:04:07.993 00:04:07.993 ' 00:04:07.993 07:18:26 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:07.993 07:18:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:07.993 07:18:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.993 07:18:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.993 07:18:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.993 07:18:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.993 07:18:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.993 07:18:26 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.993 07:18:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:07.993 07:18:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:07.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:07.993 07:18:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:07.993 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:07.993 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:07.993 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:07.993 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:07.993 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:07.993 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:07.993 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:07.993 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:07.993 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:07.993 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:07.993 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:07.993 INFO: launching applications... 
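The "[: : integer expression expected" message captured above is harmless in this run but instructive: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', i.e. an empty string reaches an arithmetic test. A hedged sketch of the defensive form, using a hypothetical variable name:

    # Defaulting an empty or unset value before the numeric test avoids
    # "[: : integer expression expected".
    if [ "${SOME_NVMF_FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi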
00:04:07.993 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:07.993 07:18:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:07.993 07:18:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:07.993 07:18:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:07.993 07:18:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:07.993 07:18:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:07.993 07:18:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.993 07:18:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.993 07:18:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3161845 00:04:07.993 07:18:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:07.993 Waiting for target to run... 00:04:07.993 07:18:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3161845 /var/tmp/spdk_tgt.sock 00:04:07.993 07:18:26 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3161845 ']' 00:04:07.993 07:18:26 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:07.993 07:18:26 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:07.993 07:18:26 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:07.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:07.994 07:18:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:07.994 07:18:26 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:07.994 07:18:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:07.994 [2024-11-20 07:18:26.172511] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:04:07.994 [2024-11-20 07:18:26.172581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161845 ] 00:04:08.565 [2024-11-20 07:18:26.512225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.565 [2024-11-20 07:18:26.538201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.825 07:18:26 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:08.825 07:18:26 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:08.825 07:18:26 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:08.825 00:04:08.825 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:08.825 INFO: shutting down applications... 
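json_config_test_start_app, as traced above, backgrounds spdk_tgt with the per-app parameters and then blocks until the UNIX-domain RPC socket answers. A minimal sketch of that launch-and-wait step; the polling loop is a simplification of the suite's waitforlisten, which also caps retries:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./test/json_config/extra_key.json &
    tgt_pid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # simplified poll; real waitforlisten gives up after max_retries
    done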
00:04:08.825 07:18:26 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:08.825 07:18:26 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:08.825 07:18:26 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:08.825 07:18:26 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3161845 ]] 00:04:08.825 07:18:26 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3161845 00:04:08.825 07:18:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:08.825 07:18:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:08.825 07:18:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3161845 00:04:08.825 07:18:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:09.396 07:18:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:09.396 07:18:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:09.396 07:18:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3161845 00:04:09.396 07:18:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:09.396 07:18:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:09.396 07:18:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:09.396 07:18:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:09.396 SPDK target shutdown done 00:04:09.396 07:18:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:09.396 Success 00:04:09.396 00:04:09.396 real 0m1.564s 00:04:09.396 user 0m1.133s 00:04:09.396 sys 0m0.457s 00:04:09.396 07:18:27 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:09.396 07:18:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:09.396 ************************************ 00:04:09.396 END TEST json_config_extra_key 00:04:09.396 ************************************ 00:04:09.396 07:18:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:09.396 07:18:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:09.396 07:18:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:09.396 07:18:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.396 ************************************ 00:04:09.396 START TEST alias_rpc 00:04:09.396 ************************************ 00:04:09.396 07:18:27 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:09.658 * Looking for test storage... 
00:04:09.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.658 07:18:27 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:09.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.658 --rc genhtml_branch_coverage=1 00:04:09.658 --rc genhtml_function_coverage=1 00:04:09.658 --rc genhtml_legend=1 00:04:09.658 --rc geninfo_all_blocks=1 00:04:09.658 --rc geninfo_unexecuted_blocks=1 00:04:09.658 00:04:09.658 ' 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:09.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.658 --rc genhtml_branch_coverage=1 00:04:09.658 --rc genhtml_function_coverage=1 00:04:09.658 --rc genhtml_legend=1 00:04:09.658 --rc geninfo_all_blocks=1 00:04:09.658 --rc geninfo_unexecuted_blocks=1 00:04:09.658 00:04:09.658 ' 00:04:09.658 07:18:27 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:09.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.658 --rc genhtml_branch_coverage=1 00:04:09.658 --rc genhtml_function_coverage=1 00:04:09.658 --rc genhtml_legend=1 00:04:09.658 --rc geninfo_all_blocks=1 00:04:09.658 --rc geninfo_unexecuted_blocks=1 00:04:09.658 00:04:09.658 ' 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:09.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.658 --rc genhtml_branch_coverage=1 00:04:09.658 --rc genhtml_function_coverage=1 00:04:09.658 --rc genhtml_legend=1 00:04:09.658 --rc geninfo_all_blocks=1 00:04:09.658 --rc geninfo_unexecuted_blocks=1 00:04:09.658 00:04:09.658 ' 00:04:09.658 07:18:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:09.658 07:18:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3162166 00:04:09.658 07:18:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3162166 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 3162166 ']' 00:04:09.658 07:18:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:09.658 07:18:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.658 [2024-11-20 07:18:27.828002] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:04:09.658 [2024-11-20 07:18:27.828079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162166 ] 00:04:09.920 [2024-11-20 07:18:27.917785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.920 [2024-11-20 07:18:27.957205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.491 07:18:28 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:10.491 07:18:28 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:10.491 07:18:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:10.753 07:18:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3162166 00:04:10.753 07:18:28 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3162166 ']' 00:04:10.753 07:18:28 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3162166 00:04:10.753 07:18:28 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:10.753 07:18:28 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:10.753 07:18:28 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3162166 00:04:10.753 07:18:28 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:10.753 07:18:28 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:10.753 07:18:28 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3162166' 00:04:10.753 killing process with pid 3162166 00:04:10.753 07:18:28 alias_rpc -- common/autotest_common.sh@971 -- # kill 3162166 00:04:10.753 07:18:28 alias_rpc -- common/autotest_common.sh@976 -- # wait 3162166 00:04:11.014 00:04:11.014 real 0m1.521s 00:04:11.014 user 0m1.679s 00:04:11.014 sys 0m0.431s 00:04:11.014 07:18:29 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:11.014 07:18:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.014 ************************************ 00:04:11.014 END TEST alias_rpc 00:04:11.014 ************************************ 00:04:11.014 07:18:29 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:11.014 07:18:29 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:11.014 07:18:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:11.014 07:18:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:11.014 07:18:29 -- common/autotest_common.sh@10 -- # set +x 00:04:11.014 ************************************ 00:04:11.014 START TEST spdkcli_tcp 00:04:11.014 ************************************ 00:04:11.014 07:18:29 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:11.275 * Looking for test storage... 
00:04:11.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:11.275 07:18:29 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:11.275 07:18:29 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:11.275 07:18:29 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:11.275 07:18:29 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.275 07:18:29 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:11.275 07:18:29 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.275 07:18:29 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:11.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.275 --rc genhtml_branch_coverage=1 00:04:11.275 --rc genhtml_function_coverage=1 00:04:11.275 --rc genhtml_legend=1 00:04:11.275 --rc geninfo_all_blocks=1 00:04:11.275 --rc geninfo_unexecuted_blocks=1 00:04:11.275 00:04:11.275 ' 00:04:11.275 07:18:29 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:11.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.275 --rc genhtml_branch_coverage=1 00:04:11.276 --rc genhtml_function_coverage=1 00:04:11.276 --rc genhtml_legend=1 00:04:11.276 --rc geninfo_all_blocks=1 00:04:11.276 --rc 
geninfo_unexecuted_blocks=1 00:04:11.276 00:04:11.276 ' 00:04:11.276 07:18:29 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:11.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.276 --rc genhtml_branch_coverage=1 00:04:11.276 --rc genhtml_function_coverage=1 00:04:11.276 --rc genhtml_legend=1 00:04:11.276 --rc geninfo_all_blocks=1 00:04:11.276 --rc geninfo_unexecuted_blocks=1 00:04:11.276 00:04:11.276 ' 00:04:11.276 07:18:29 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:11.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.276 --rc genhtml_branch_coverage=1 00:04:11.276 --rc genhtml_function_coverage=1 00:04:11.276 --rc genhtml_legend=1 00:04:11.276 --rc geninfo_all_blocks=1 00:04:11.276 --rc geninfo_unexecuted_blocks=1 00:04:11.276 00:04:11.276 ' 00:04:11.276 07:18:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:11.276 07:18:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:11.276 07:18:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:11.276 07:18:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:11.276 07:18:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:11.276 07:18:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:11.276 07:18:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:11.276 07:18:29 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:11.276 07:18:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:11.276 07:18:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3162521 00:04:11.276 07:18:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3162521 00:04:11.276 07:18:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:11.276 07:18:29 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3162521 ']' 00:04:11.276 07:18:29 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.276 07:18:29 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:11.276 07:18:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.276 07:18:29 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:11.276 07:18:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:11.276 [2024-11-20 07:18:29.423565] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:04:11.276 [2024-11-20 07:18:29.423635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162521 ] 00:04:11.536 [2024-11-20 07:18:29.513119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:11.536 [2024-11-20 07:18:29.556800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.536 [2024-11-20 07:18:29.556826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.107 07:18:30 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:12.107 07:18:30 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:12.107 07:18:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3162725 00:04:12.107 07:18:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:12.107 07:18:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:12.369 [ 00:04:12.369 "bdev_malloc_delete", 00:04:12.369 "bdev_malloc_create", 00:04:12.369 "bdev_null_resize", 00:04:12.369 "bdev_null_delete", 00:04:12.369 "bdev_null_create", 00:04:12.369 "bdev_nvme_cuse_unregister", 00:04:12.369 "bdev_nvme_cuse_register", 00:04:12.369 "bdev_opal_new_user", 00:04:12.369 "bdev_opal_set_lock_state", 00:04:12.369 "bdev_opal_delete", 00:04:12.369 "bdev_opal_get_info", 00:04:12.369 "bdev_opal_create", 00:04:12.369 "bdev_nvme_opal_revert", 00:04:12.369 "bdev_nvme_opal_init", 00:04:12.369 "bdev_nvme_send_cmd", 00:04:12.369 "bdev_nvme_set_keys", 00:04:12.369 "bdev_nvme_get_path_iostat", 00:04:12.369 "bdev_nvme_get_mdns_discovery_info", 00:04:12.369 "bdev_nvme_stop_mdns_discovery", 00:04:12.369 "bdev_nvme_start_mdns_discovery", 00:04:12.369 "bdev_nvme_set_multipath_policy", 00:04:12.369 "bdev_nvme_set_preferred_path", 00:04:12.369 "bdev_nvme_get_io_paths", 00:04:12.369 "bdev_nvme_remove_error_injection", 00:04:12.369 "bdev_nvme_add_error_injection", 00:04:12.369 "bdev_nvme_get_discovery_info", 00:04:12.369 "bdev_nvme_stop_discovery", 00:04:12.369 "bdev_nvme_start_discovery", 00:04:12.369 "bdev_nvme_get_controller_health_info", 00:04:12.369 "bdev_nvme_disable_controller", 00:04:12.369 "bdev_nvme_enable_controller", 00:04:12.369 "bdev_nvme_reset_controller", 00:04:12.369 "bdev_nvme_get_transport_statistics", 00:04:12.369 "bdev_nvme_apply_firmware", 00:04:12.369 "bdev_nvme_detach_controller", 00:04:12.369 "bdev_nvme_get_controllers", 00:04:12.369 "bdev_nvme_attach_controller", 00:04:12.369 "bdev_nvme_set_hotplug", 00:04:12.369 "bdev_nvme_set_options", 00:04:12.369 "bdev_passthru_delete", 00:04:12.369 "bdev_passthru_create", 00:04:12.369 "bdev_lvol_set_parent_bdev", 00:04:12.369 "bdev_lvol_set_parent", 00:04:12.369 "bdev_lvol_check_shallow_copy", 00:04:12.369 "bdev_lvol_start_shallow_copy", 00:04:12.369 "bdev_lvol_grow_lvstore", 00:04:12.369 "bdev_lvol_get_lvols", 00:04:12.369 "bdev_lvol_get_lvstores", 00:04:12.369 "bdev_lvol_delete", 00:04:12.369 "bdev_lvol_set_read_only", 00:04:12.369 "bdev_lvol_resize", 00:04:12.369 "bdev_lvol_decouple_parent", 00:04:12.369 "bdev_lvol_inflate", 00:04:12.369 "bdev_lvol_rename", 00:04:12.369 "bdev_lvol_clone_bdev", 00:04:12.369 "bdev_lvol_clone", 00:04:12.369 "bdev_lvol_snapshot", 00:04:12.369 "bdev_lvol_create", 00:04:12.369 "bdev_lvol_delete_lvstore", 00:04:12.369 "bdev_lvol_rename_lvstore", 
00:04:12.369 "bdev_lvol_create_lvstore", 00:04:12.369 "bdev_raid_set_options", 00:04:12.369 "bdev_raid_remove_base_bdev", 00:04:12.369 "bdev_raid_add_base_bdev", 00:04:12.369 "bdev_raid_delete", 00:04:12.369 "bdev_raid_create", 00:04:12.369 "bdev_raid_get_bdevs", 00:04:12.369 "bdev_error_inject_error", 00:04:12.369 "bdev_error_delete", 00:04:12.369 "bdev_error_create", 00:04:12.369 "bdev_split_delete", 00:04:12.369 "bdev_split_create", 00:04:12.369 "bdev_delay_delete", 00:04:12.369 "bdev_delay_create", 00:04:12.369 "bdev_delay_update_latency", 00:04:12.369 "bdev_zone_block_delete", 00:04:12.369 "bdev_zone_block_create", 00:04:12.369 "blobfs_create", 00:04:12.369 "blobfs_detect", 00:04:12.369 "blobfs_set_cache_size", 00:04:12.369 "bdev_aio_delete", 00:04:12.369 "bdev_aio_rescan", 00:04:12.369 "bdev_aio_create", 00:04:12.369 "bdev_ftl_set_property", 00:04:12.369 "bdev_ftl_get_properties", 00:04:12.369 "bdev_ftl_get_stats", 00:04:12.369 "bdev_ftl_unmap", 00:04:12.369 "bdev_ftl_unload", 00:04:12.369 "bdev_ftl_delete", 00:04:12.369 "bdev_ftl_load", 00:04:12.369 "bdev_ftl_create", 00:04:12.369 "bdev_virtio_attach_controller", 00:04:12.369 "bdev_virtio_scsi_get_devices", 00:04:12.369 "bdev_virtio_detach_controller", 00:04:12.369 "bdev_virtio_blk_set_hotplug", 00:04:12.369 "bdev_iscsi_delete", 00:04:12.369 "bdev_iscsi_create", 00:04:12.369 "bdev_iscsi_set_options", 00:04:12.369 "accel_error_inject_error", 00:04:12.369 "ioat_scan_accel_module", 00:04:12.369 "dsa_scan_accel_module", 00:04:12.369 "iaa_scan_accel_module", 00:04:12.369 "vfu_virtio_create_fs_endpoint", 00:04:12.369 "vfu_virtio_create_scsi_endpoint", 00:04:12.369 "vfu_virtio_scsi_remove_target", 00:04:12.369 "vfu_virtio_scsi_add_target", 00:04:12.369 "vfu_virtio_create_blk_endpoint", 00:04:12.369 "vfu_virtio_delete_endpoint", 00:04:12.369 "keyring_file_remove_key", 00:04:12.369 "keyring_file_add_key", 00:04:12.369 "keyring_linux_set_options", 00:04:12.369 "fsdev_aio_delete", 00:04:12.369 "fsdev_aio_create", 00:04:12.369 "iscsi_get_histogram", 00:04:12.369 "iscsi_enable_histogram", 00:04:12.369 "iscsi_set_options", 00:04:12.369 "iscsi_get_auth_groups", 00:04:12.369 "iscsi_auth_group_remove_secret", 00:04:12.369 "iscsi_auth_group_add_secret", 00:04:12.369 "iscsi_delete_auth_group", 00:04:12.369 "iscsi_create_auth_group", 00:04:12.369 "iscsi_set_discovery_auth", 00:04:12.369 "iscsi_get_options", 00:04:12.369 "iscsi_target_node_request_logout", 00:04:12.369 "iscsi_target_node_set_redirect", 00:04:12.369 "iscsi_target_node_set_auth", 00:04:12.369 "iscsi_target_node_add_lun", 00:04:12.369 "iscsi_get_stats", 00:04:12.369 "iscsi_get_connections", 00:04:12.369 "iscsi_portal_group_set_auth", 00:04:12.369 "iscsi_start_portal_group", 00:04:12.369 "iscsi_delete_portal_group", 00:04:12.369 "iscsi_create_portal_group", 00:04:12.369 "iscsi_get_portal_groups", 00:04:12.369 "iscsi_delete_target_node", 00:04:12.369 "iscsi_target_node_remove_pg_ig_maps", 00:04:12.369 "iscsi_target_node_add_pg_ig_maps", 00:04:12.369 "iscsi_create_target_node", 00:04:12.369 "iscsi_get_target_nodes", 00:04:12.369 "iscsi_delete_initiator_group", 00:04:12.369 "iscsi_initiator_group_remove_initiators", 00:04:12.369 "iscsi_initiator_group_add_initiators", 00:04:12.369 "iscsi_create_initiator_group", 00:04:12.369 "iscsi_get_initiator_groups", 00:04:12.369 "nvmf_set_crdt", 00:04:12.369 "nvmf_set_config", 00:04:12.369 "nvmf_set_max_subsystems", 00:04:12.369 "nvmf_stop_mdns_prr", 00:04:12.369 "nvmf_publish_mdns_prr", 00:04:12.369 "nvmf_subsystem_get_listeners", 00:04:12.369 
"nvmf_subsystem_get_qpairs", 00:04:12.369 "nvmf_subsystem_get_controllers", 00:04:12.369 "nvmf_get_stats", 00:04:12.369 "nvmf_get_transports", 00:04:12.369 "nvmf_create_transport", 00:04:12.369 "nvmf_get_targets", 00:04:12.369 "nvmf_delete_target", 00:04:12.369 "nvmf_create_target", 00:04:12.369 "nvmf_subsystem_allow_any_host", 00:04:12.369 "nvmf_subsystem_set_keys", 00:04:12.369 "nvmf_subsystem_remove_host", 00:04:12.369 "nvmf_subsystem_add_host", 00:04:12.369 "nvmf_ns_remove_host", 00:04:12.369 "nvmf_ns_add_host", 00:04:12.369 "nvmf_subsystem_remove_ns", 00:04:12.369 "nvmf_subsystem_set_ns_ana_group", 00:04:12.369 "nvmf_subsystem_add_ns", 00:04:12.369 "nvmf_subsystem_listener_set_ana_state", 00:04:12.369 "nvmf_discovery_get_referrals", 00:04:12.369 "nvmf_discovery_remove_referral", 00:04:12.369 "nvmf_discovery_add_referral", 00:04:12.369 "nvmf_subsystem_remove_listener", 00:04:12.369 "nvmf_subsystem_add_listener", 00:04:12.369 "nvmf_delete_subsystem", 00:04:12.369 "nvmf_create_subsystem", 00:04:12.369 "nvmf_get_subsystems", 00:04:12.369 "env_dpdk_get_mem_stats", 00:04:12.369 "nbd_get_disks", 00:04:12.369 "nbd_stop_disk", 00:04:12.369 "nbd_start_disk", 00:04:12.369 "ublk_recover_disk", 00:04:12.369 "ublk_get_disks", 00:04:12.369 "ublk_stop_disk", 00:04:12.369 "ublk_start_disk", 00:04:12.369 "ublk_destroy_target", 00:04:12.369 "ublk_create_target", 00:04:12.369 "virtio_blk_create_transport", 00:04:12.369 "virtio_blk_get_transports", 00:04:12.369 "vhost_controller_set_coalescing", 00:04:12.369 "vhost_get_controllers", 00:04:12.369 "vhost_delete_controller", 00:04:12.369 "vhost_create_blk_controller", 00:04:12.369 "vhost_scsi_controller_remove_target", 00:04:12.369 "vhost_scsi_controller_add_target", 00:04:12.369 "vhost_start_scsi_controller", 00:04:12.369 "vhost_create_scsi_controller", 00:04:12.369 "thread_set_cpumask", 00:04:12.369 "scheduler_set_options", 00:04:12.369 "framework_get_governor", 00:04:12.369 "framework_get_scheduler", 00:04:12.369 "framework_set_scheduler", 00:04:12.369 "framework_get_reactors", 00:04:12.369 "thread_get_io_channels", 00:04:12.369 "thread_get_pollers", 00:04:12.369 "thread_get_stats", 00:04:12.369 "framework_monitor_context_switch", 00:04:12.370 "spdk_kill_instance", 00:04:12.370 "log_enable_timestamps", 00:04:12.370 "log_get_flags", 00:04:12.370 "log_clear_flag", 00:04:12.370 "log_set_flag", 00:04:12.370 "log_get_level", 00:04:12.370 "log_set_level", 00:04:12.370 "log_get_print_level", 00:04:12.370 "log_set_print_level", 00:04:12.370 "framework_enable_cpumask_locks", 00:04:12.370 "framework_disable_cpumask_locks", 00:04:12.370 "framework_wait_init", 00:04:12.370 "framework_start_init", 00:04:12.370 "scsi_get_devices", 00:04:12.370 "bdev_get_histogram", 00:04:12.370 "bdev_enable_histogram", 00:04:12.370 "bdev_set_qos_limit", 00:04:12.370 "bdev_set_qd_sampling_period", 00:04:12.370 "bdev_get_bdevs", 00:04:12.370 "bdev_reset_iostat", 00:04:12.370 "bdev_get_iostat", 00:04:12.370 "bdev_examine", 00:04:12.370 "bdev_wait_for_examine", 00:04:12.370 "bdev_set_options", 00:04:12.370 "accel_get_stats", 00:04:12.370 "accel_set_options", 00:04:12.370 "accel_set_driver", 00:04:12.370 "accel_crypto_key_destroy", 00:04:12.370 "accel_crypto_keys_get", 00:04:12.370 "accel_crypto_key_create", 00:04:12.370 "accel_assign_opc", 00:04:12.370 "accel_get_module_info", 00:04:12.370 "accel_get_opc_assignments", 00:04:12.370 "vmd_rescan", 00:04:12.370 "vmd_remove_device", 00:04:12.370 "vmd_enable", 00:04:12.370 "sock_get_default_impl", 00:04:12.370 "sock_set_default_impl", 
00:04:12.370 "sock_impl_set_options", 00:04:12.370 "sock_impl_get_options", 00:04:12.370 "iobuf_get_stats", 00:04:12.370 "iobuf_set_options", 00:04:12.370 "keyring_get_keys", 00:04:12.370 "vfu_tgt_set_base_path", 00:04:12.370 "framework_get_pci_devices", 00:04:12.370 "framework_get_config", 00:04:12.370 "framework_get_subsystems", 00:04:12.370 "fsdev_set_opts", 00:04:12.370 "fsdev_get_opts", 00:04:12.370 "trace_get_info", 00:04:12.370 "trace_get_tpoint_group_mask", 00:04:12.370 "trace_disable_tpoint_group", 00:04:12.370 "trace_enable_tpoint_group", 00:04:12.370 "trace_clear_tpoint_mask", 00:04:12.370 "trace_set_tpoint_mask", 00:04:12.370 "notify_get_notifications", 00:04:12.370 "notify_get_types", 00:04:12.370 "spdk_get_version", 00:04:12.370 "rpc_get_methods" 00:04:12.370 ] 00:04:12.370 07:18:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:12.370 07:18:30 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:12.370 07:18:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:12.370 07:18:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:12.370 07:18:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3162521 00:04:12.370 07:18:30 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3162521 ']' 00:04:12.370 07:18:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3162521 00:04:12.370 07:18:30 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:12.370 07:18:30 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:12.370 07:18:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3162521 00:04:12.370 07:18:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:12.370 07:18:30 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:12.370 07:18:30 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3162521' 00:04:12.370 killing process with pid 3162521 00:04:12.370 07:18:30 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3162521 00:04:12.370 07:18:30 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3162521 00:04:12.631 00:04:12.631 real 0m1.558s 00:04:12.631 user 0m2.832s 00:04:12.631 sys 0m0.477s 00:04:12.631 07:18:30 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:12.631 07:18:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:12.631 ************************************ 00:04:12.631 END TEST spdkcli_tcp 00:04:12.631 ************************************ 00:04:12.631 07:18:30 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:12.631 07:18:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:12.631 07:18:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.631 07:18:30 -- common/autotest_common.sh@10 -- # set +x 00:04:12.631 ************************************ 00:04:12.631 START TEST dpdk_mem_utility 00:04:12.631 ************************************ 00:04:12.631 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:12.893 * Looking for test storage... 
00:04:12.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:12.893 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:12.893 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:12.893 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:12.893 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.893 07:18:30 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:12.893 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.893 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:12.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.893 --rc genhtml_branch_coverage=1 00:04:12.893 --rc genhtml_function_coverage=1 00:04:12.893 --rc genhtml_legend=1 00:04:12.894 --rc geninfo_all_blocks=1 00:04:12.894 --rc geninfo_unexecuted_blocks=1 00:04:12.894 00:04:12.894 ' 00:04:12.894 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:12.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.894 --rc 
genhtml_branch_coverage=1 00:04:12.894 --rc genhtml_function_coverage=1 00:04:12.894 --rc genhtml_legend=1 00:04:12.894 --rc geninfo_all_blocks=1 00:04:12.894 --rc geninfo_unexecuted_blocks=1 00:04:12.894 00:04:12.894 ' 00:04:12.894 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:12.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.894 --rc genhtml_branch_coverage=1 00:04:12.894 --rc genhtml_function_coverage=1 00:04:12.894 --rc genhtml_legend=1 00:04:12.894 --rc geninfo_all_blocks=1 00:04:12.894 --rc geninfo_unexecuted_blocks=1 00:04:12.894 00:04:12.894 ' 00:04:12.894 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:12.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.894 --rc genhtml_branch_coverage=1 00:04:12.894 --rc genhtml_function_coverage=1 00:04:12.894 --rc genhtml_legend=1 00:04:12.894 --rc geninfo_all_blocks=1 00:04:12.894 --rc geninfo_unexecuted_blocks=1 00:04:12.894 00:04:12.894 ' 00:04:12.894 07:18:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:12.894 07:18:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3162878 00:04:12.894 07:18:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3162878 00:04:12.894 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3162878 ']' 00:04:12.894 07:18:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.894 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.894 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:12.894 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.894 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:12.894 07:18:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:12.894 [2024-11-20 07:18:31.040329] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:04:12.894 [2024-11-20 07:18:31.040401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162878 ] 00:04:13.156 [2024-11-20 07:18:31.131221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.156 [2024-11-20 07:18:31.166020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.726 07:18:31 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:13.726 07:18:31 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:13.726 07:18:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:13.726 07:18:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:13.726 07:18:31 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.726 07:18:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:13.726 { 00:04:13.726 "filename": "/tmp/spdk_mem_dump.txt" 00:04:13.726 } 00:04:13.726 07:18:31 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.726 07:18:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:13.726 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:13.726 1 heaps totaling size 818.000000 MiB 00:04:13.726 size: 818.000000 MiB heap id: 0 00:04:13.726 end heaps---------- 00:04:13.726 9 mempools totaling size 603.782043 MiB 00:04:13.726 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:13.726 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:13.726 size: 100.555481 MiB name: bdev_io_3162878 00:04:13.726 size: 50.003479 MiB name: msgpool_3162878 00:04:13.726 size: 36.509338 MiB name: fsdev_io_3162878 00:04:13.726 size: 21.763794 MiB name: PDU_Pool 00:04:13.726 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:13.726 size: 4.133484 MiB name: evtpool_3162878 00:04:13.726 size: 0.026123 MiB name: Session_Pool 00:04:13.726 end mempools------- 00:04:13.726 6 memzones totaling size 4.142822 MiB 00:04:13.726 size: 1.000366 MiB name: RG_ring_0_3162878 00:04:13.726 size: 1.000366 MiB name: RG_ring_1_3162878 00:04:13.726 size: 1.000366 MiB name: RG_ring_4_3162878 00:04:13.726 size: 1.000366 MiB name: RG_ring_5_3162878 00:04:13.726 size: 0.125366 MiB name: RG_ring_2_3162878 00:04:13.726 size: 0.015991 MiB name: RG_ring_3_3162878 00:04:13.726 end memzones------- 00:04:13.726 07:18:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:13.987 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:13.987 list of free elements. 
size: 10.852478 MiB 00:04:13.987 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:13.987 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:13.987 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:13.987 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:13.987 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:13.987 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:13.987 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:13.987 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:13.987 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:13.987 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:13.987 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:13.987 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:13.987 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:13.987 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:13.987 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:13.987 list of standard malloc elements. size: 199.218628 MiB 00:04:13.987 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:13.987 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:13.987 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:13.987 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:13.987 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:13.987 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:13.987 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:13.987 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:13.987 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:13.987 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:13.987 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:13.987 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:13.987 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:13.987 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:13.987 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:13.987 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:13.987 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:13.987 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:13.987 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:13.987 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:13.987 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:13.987 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:13.988 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:13.988 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:13.988 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:13.988 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:13.988 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:13.988 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:13.988 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:13.988 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:13.988 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:13.988 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:13.988 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:13.988 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:13.988 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:13.988 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:13.988 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:13.988 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:13.988 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:13.988 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:13.988 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:13.988 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:13.988 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:13.988 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:13.988 list of memzone associated elements. size: 607.928894 MiB 00:04:13.988 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:13.988 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:13.988 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:13.988 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:13.988 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:13.988 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3162878_0 00:04:13.988 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:13.988 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3162878_0 00:04:13.988 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:13.988 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3162878_0 00:04:13.988 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:13.988 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:13.988 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:13.988 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:13.988 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:13.988 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3162878_0 00:04:13.988 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:13.988 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3162878 00:04:13.988 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:13.988 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3162878 00:04:13.988 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:13.988 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:13.988 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:13.988 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:13.988 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:13.988 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:13.988 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:13.988 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:13.988 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:13.988 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3162878 00:04:13.988 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:13.988 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3162878 00:04:13.988 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:13.988 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3162878 00:04:13.988 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:13.988 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3162878 00:04:13.988 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:13.988 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3162878 00:04:13.988 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:13.988 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3162878 00:04:13.988 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:13.988 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:13.988 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:13.988 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:13.988 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:13.988 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:13.988 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:13.988 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3162878 00:04:13.988 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:13.988 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3162878 00:04:13.988 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:13.988 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:13.988 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:13.988 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:13.988 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:13.988 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3162878 00:04:13.988 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:13.988 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:13.988 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:13.988 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3162878 00:04:13.988 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:13.988 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3162878 00:04:13.988 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:13.988 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3162878 00:04:13.988 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:13.988 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:13.988 07:18:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:13.988 07:18:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3162878 00:04:13.988 07:18:31 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3162878 ']' 00:04:13.988 07:18:31 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3162878 00:04:13.988 07:18:31 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:13.988 07:18:31 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:13.988 07:18:31 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3162878 00:04:13.988 07:18:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:13.988 07:18:32 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:13.988 07:18:32 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3162878' 00:04:13.988 killing process with pid 3162878 00:04:13.988 07:18:32 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3162878 00:04:13.988 07:18:32 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3162878 00:04:13.988 00:04:13.988 real 0m1.405s 00:04:13.988 user 0m1.482s 00:04:13.988 sys 0m0.419s 00:04:13.988 07:18:32 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.988 07:18:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:13.988 ************************************ 00:04:13.988 END TEST dpdk_mem_utility 00:04:13.988 ************************************ 00:04:14.250 07:18:32 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:14.250 07:18:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.250 07:18:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.250 07:18:32 -- common/autotest_common.sh@10 -- # set +x 00:04:14.250 ************************************ 00:04:14.250 START TEST event 00:04:14.250 ************************************ 00:04:14.250 07:18:32 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:14.250 * Looking for test storage... 00:04:14.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:14.250 07:18:32 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:14.250 07:18:32 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:14.250 07:18:32 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:14.250 07:18:32 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:14.250 07:18:32 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.250 07:18:32 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.250 07:18:32 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.250 07:18:32 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.250 07:18:32 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.250 07:18:32 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.250 07:18:32 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.250 07:18:32 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.250 07:18:32 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.250 07:18:32 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.250 07:18:32 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.250 07:18:32 event -- scripts/common.sh@344 -- # case "$op" in 00:04:14.250 07:18:32 event -- scripts/common.sh@345 -- # : 1 00:04:14.250 07:18:32 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.250 07:18:32 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.250 07:18:32 event -- scripts/common.sh@365 -- # decimal 1 00:04:14.250 07:18:32 event -- scripts/common.sh@353 -- # local d=1 00:04:14.250 07:18:32 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.250 07:18:32 event -- scripts/common.sh@355 -- # echo 1 00:04:14.511 07:18:32 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.511 07:18:32 event -- scripts/common.sh@366 -- # decimal 2 00:04:14.511 07:18:32 event -- scripts/common.sh@353 -- # local d=2 00:04:14.511 07:18:32 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.511 07:18:32 event -- scripts/common.sh@355 -- # echo 2 00:04:14.511 07:18:32 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.511 07:18:32 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.511 07:18:32 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.511 07:18:32 event -- scripts/common.sh@368 -- # return 0 00:04:14.511 07:18:32 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.511 07:18:32 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:14.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.511 --rc genhtml_branch_coverage=1 00:04:14.511 --rc genhtml_function_coverage=1 00:04:14.511 --rc genhtml_legend=1 00:04:14.511 --rc geninfo_all_blocks=1 00:04:14.511 --rc geninfo_unexecuted_blocks=1 00:04:14.511 00:04:14.511 ' 00:04:14.511 07:18:32 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:14.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.511 --rc genhtml_branch_coverage=1 00:04:14.511 --rc genhtml_function_coverage=1 00:04:14.511 --rc genhtml_legend=1 00:04:14.511 --rc geninfo_all_blocks=1 00:04:14.511 --rc geninfo_unexecuted_blocks=1 00:04:14.511 00:04:14.511 ' 00:04:14.511 07:18:32 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:14.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.511 --rc genhtml_branch_coverage=1 00:04:14.511 --rc genhtml_function_coverage=1 00:04:14.511 --rc genhtml_legend=1 00:04:14.511 --rc geninfo_all_blocks=1 00:04:14.511 --rc geninfo_unexecuted_blocks=1 00:04:14.511 00:04:14.511 ' 00:04:14.511 07:18:32 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:14.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.511 --rc genhtml_branch_coverage=1 00:04:14.511 --rc genhtml_function_coverage=1 00:04:14.511 --rc genhtml_legend=1 00:04:14.511 --rc geninfo_all_blocks=1 00:04:14.511 --rc geninfo_unexecuted_blocks=1 00:04:14.511 00:04:14.511 ' 00:04:14.511 07:18:32 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:14.511 07:18:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:14.511 07:18:32 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:14.511 07:18:32 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:14.511 07:18:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.511 07:18:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.511 ************************************ 00:04:14.511 START TEST event_perf 00:04:14.511 ************************************ 00:04:14.511 07:18:32 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:14.511 Running I/O for 1 seconds...[2024-11-20 07:18:32.531584] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:04:14.511 [2024-11-20 07:18:32.531695] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3163212 ] 00:04:14.511 [2024-11-20 07:18:32.624877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:14.511 [2024-11-20 07:18:32.669607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.511 [2024-11-20 07:18:32.669782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:14.511 [2024-11-20 07:18:32.669879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.511 [2024-11-20 07:18:32.669880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:15.896 Running I/O for 1 seconds... 00:04:15.896 lcore 0: 179156 00:04:15.896 lcore 1: 179159 00:04:15.896 lcore 2: 179155 00:04:15.896 lcore 3: 179157 00:04:15.896 done. 00:04:15.896 00:04:15.896 real 0m1.188s 00:04:15.896 user 0m4.091s 00:04:15.896 sys 0m0.092s 00:04:15.896 07:18:33 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.896 07:18:33 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:15.896 ************************************ 00:04:15.896 END TEST event_perf 00:04:15.896 ************************************ 00:04:15.896 07:18:33 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:15.896 07:18:33 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:15.896 07:18:33 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.896 07:18:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:15.896 ************************************ 00:04:15.896 START TEST event_reactor 00:04:15.896 ************************************ 00:04:15.896 07:18:33 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:15.896 [2024-11-20 07:18:33.798400] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:04:15.896 [2024-11-20 07:18:33.798501] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3163561 ] 00:04:15.896 [2024-11-20 07:18:33.895397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.896 [2024-11-20 07:18:33.927061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.838 test_start 00:04:16.838 oneshot 00:04:16.838 tick 100 00:04:16.838 tick 100 00:04:16.838 tick 250 00:04:16.838 tick 100 00:04:16.838 tick 100 00:04:16.838 tick 250 00:04:16.838 tick 100 00:04:16.838 tick 500 00:04:16.838 tick 100 00:04:16.838 tick 100 00:04:16.838 tick 250 00:04:16.838 tick 100 00:04:16.838 tick 100 00:04:16.838 test_end 00:04:16.838 00:04:16.838 real 0m1.177s 00:04:16.838 user 0m1.088s 00:04:16.838 sys 0m0.084s 00:04:16.838 07:18:34 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.838 07:18:34 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:16.838 ************************************ 00:04:16.838 END TEST event_reactor 00:04:16.838 ************************************ 00:04:16.838 07:18:34 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:16.838 07:18:34 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:16.838 07:18:34 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.838 07:18:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:16.838 ************************************ 00:04:16.838 START TEST event_reactor_perf 00:04:16.838 ************************************ 00:04:16.838 07:18:35 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:17.099 [2024-11-20 07:18:35.051777] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:04:17.099 [2024-11-20 07:18:35.051861] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3163909 ] 00:04:17.099 [2024-11-20 07:18:35.140767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.099 [2024-11-20 07:18:35.171320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.038 test_start 00:04:18.038 test_end 00:04:18.038 Performance: 541345 events per second 00:04:18.038 00:04:18.038 real 0m1.166s 00:04:18.038 user 0m1.093s 00:04:18.038 sys 0m0.071s 00:04:18.038 07:18:36 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:18.038 07:18:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:18.038 ************************************ 00:04:18.038 END TEST event_reactor_perf 00:04:18.038 ************************************ 00:04:18.038 07:18:36 event -- event/event.sh@49 -- # uname -s 00:04:18.038 07:18:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:18.038 07:18:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:18.038 07:18:36 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:18.038 07:18:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:18.038 07:18:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.299 ************************************ 00:04:18.299 START TEST event_scheduler 00:04:18.299 ************************************ 00:04:18.299 07:18:36 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:18.299 * Looking for test storage... 
00:04:18.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:18.299 07:18:36 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:18.299 07:18:36 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:18.299 07:18:36 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:18.299 07:18:36 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:18.299 07:18:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.300 07:18:36 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:18.300 07:18:36 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.300 07:18:36 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:18.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.300 --rc genhtml_branch_coverage=1 00:04:18.300 --rc genhtml_function_coverage=1 00:04:18.300 --rc genhtml_legend=1 00:04:18.300 --rc geninfo_all_blocks=1 00:04:18.300 --rc geninfo_unexecuted_blocks=1 00:04:18.300 00:04:18.300 ' 00:04:18.300 07:18:36 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:18.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.300 --rc genhtml_branch_coverage=1 00:04:18.300 --rc genhtml_function_coverage=1 00:04:18.300 --rc genhtml_legend=1 00:04:18.300 --rc geninfo_all_blocks=1 00:04:18.300 --rc geninfo_unexecuted_blocks=1 00:04:18.300 00:04:18.300 ' 00:04:18.300 07:18:36 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:18.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.300 --rc genhtml_branch_coverage=1 00:04:18.300 --rc genhtml_function_coverage=1 00:04:18.300 --rc genhtml_legend=1 00:04:18.300 --rc geninfo_all_blocks=1 00:04:18.300 --rc geninfo_unexecuted_blocks=1 00:04:18.300 00:04:18.300 ' 00:04:18.300 07:18:36 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:18.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.300 --rc genhtml_branch_coverage=1 00:04:18.300 --rc genhtml_function_coverage=1 00:04:18.300 --rc genhtml_legend=1 00:04:18.300 --rc geninfo_all_blocks=1 00:04:18.300 --rc geninfo_unexecuted_blocks=1 00:04:18.300 00:04:18.300 ' 00:04:18.300 07:18:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:18.300 07:18:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3164216 00:04:18.300 07:18:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.300 07:18:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3164216 00:04:18.300 07:18:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:04:18.300 07:18:36 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3164216 ']' 00:04:18.300 07:18:36 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.300 07:18:36 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:18.300 07:18:36 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.300 07:18:36 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:18.300 07:18:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:18.560 [2024-11-20 07:18:36.530126] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:04:18.560 [2024-11-20 07:18:36.530199] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3164216 ] 00:04:18.560 [2024-11-20 07:18:36.626879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:18.560 [2024-11-20 07:18:36.681885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.560 [2024-11-20 07:18:36.682046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.560 [2024-11-20 07:18:36.682207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:18.560 [2024-11-20 07:18:36.682209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:19.502 07:18:37 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:19.502 07:18:37 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:19.502 07:18:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:19.502 07:18:37 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.502 07:18:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:19.502 [2024-11-20 07:18:37.352564] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:19.502 [2024-11-20 07:18:37.352583] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:19.502 [2024-11-20 07:18:37.352593] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:19.502 [2024-11-20 07:18:37.352599] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:19.502 [2024-11-20 07:18:37.352604] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:19.502 07:18:37 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.502 07:18:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:19.502 07:18:37 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.502 07:18:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:19.502 [2024-11-20 07:18:37.416130] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
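The scheduler_create_thread subtest that follows drives the scheduler test app purely over JSON-RPC, using an rpc.py plugin shipped with the test. A condensed sketch of that sequence, under two assumptions: the app was started with --wait-for-rpc (as in the trace above), and the scheduler test directory is on PYTHONPATH so the scheduler_plugin module resolves:

    rpc=scripts/rpc.py
    $rpc framework_set_scheduler dynamic    # must be chosen before framework init
    $rpc framework_start_init               # finish the startup deferred by --wait-for-rpc
    # Create an always-busy thread pinned to core 0 (cpumask 0x1, 100% active);
    # the plugin prints the new thread id on stdout.
    id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$id" 50
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$id"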
00:04:19.502 07:18:37 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.502 07:18:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:19.502 07:18:37 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:19.502 07:18:37 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:19.502 07:18:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:19.502 ************************************ 00:04:19.502 START TEST scheduler_create_thread 00:04:19.502 ************************************ 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.502 2 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.502 3 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.502 4 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.502 5 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.502 6 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.502 7 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.502 07:18:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:19.503 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.503 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.503 8 00:04:19.503 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.503 07:18:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:19.503 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.503 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.503 9 00:04:19.503 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.503 07:18:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:19.503 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.503 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.076 10 00:04:20.076 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.076 07:18:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:20.076 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.076 07:18:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.461 07:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:21.461 07:18:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:21.461 07:18:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:21.461 07:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.461 07:18:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.032 07:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.032 07:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:22.032 07:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.032 07:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.974 07:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.974 07:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:22.974 07:18:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:22.974 07:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.974 07:18:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.546 07:18:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.546 00:04:23.546 real 0m4.226s 00:04:23.546 user 0m0.025s 00:04:23.546 sys 0m0.007s 00:04:23.546 07:18:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.546 07:18:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.546 ************************************ 00:04:23.546 END TEST scheduler_create_thread 00:04:23.546 ************************************ 00:04:23.546 07:18:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:23.546 07:18:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3164216 00:04:23.546 07:18:41 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3164216 ']' 00:04:23.546 07:18:41 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 3164216 00:04:23.546 07:18:41 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:23.546 07:18:41 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:23.546 07:18:41 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3164216 00:04:23.806 07:18:41 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:23.806 07:18:41 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:23.806 07:18:41 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3164216' 00:04:23.806 killing process with pid 3164216 00:04:23.806 07:18:41 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3164216 00:04:23.806 07:18:41 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3164216 00:04:23.806 [2024-11-20 07:18:41.957751] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
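Each END TEST block in this log tears its target down through the killprocess helper from common/autotest_common.sh; the xtrace lines above (kill -0, ps --no-headers -o comm=, kill, wait) are that helper's body executing. A condensed reconstruction for reference only — the real helper also special-cases sudo wrappers and non-Linux hosts, which is elided here:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                            # error out if the pid is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 or reactor_2 above
        [ "$name" = sudo ] && return 1            # simplified: never signal a bare sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap the child and collect its status
    }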
00:04:24.067 00:04:24.067 real 0m5.837s 00:04:24.067 user 0m12.878s 00:04:24.067 sys 0m0.444s 00:04:24.067 07:18:42 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.067 07:18:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.067 ************************************ 00:04:24.067 END TEST event_scheduler 00:04:24.067 ************************************ 00:04:24.067 07:18:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:24.067 07:18:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:24.067 07:18:42 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.067 07:18:42 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.067 07:18:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.067 ************************************ 00:04:24.067 START TEST app_repeat 00:04:24.067 ************************************ 00:04:24.067 07:18:42 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:24.067 07:18:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.067 07:18:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.067 07:18:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:24.067 07:18:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.067 07:18:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:24.067 07:18:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:24.067 07:18:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:24.067 07:18:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3165367 00:04:24.067 07:18:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.067 07:18:42 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:24.067 07:18:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3165367' 00:04:24.067 Process app_repeat pid: 3165367 00:04:24.067 07:18:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:24.067 07:18:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:24.067 spdk_app_start Round 0 00:04:24.068 07:18:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3165367 /var/tmp/spdk-nbd.sock 00:04:24.068 07:18:42 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3165367 ']' 00:04:24.068 07:18:42 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.068 07:18:42 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:24.068 07:18:42 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:24.068 07:18:42 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:24.068 07:18:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.068 [2024-11-20 07:18:42.237691] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:04:24.068 [2024-11-20 07:18:42.237768] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3165367 ] 00:04:24.329 [2024-11-20 07:18:42.323287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.329 [2024-11-20 07:18:42.355137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.329 [2024-11-20 07:18:42.355137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.329 07:18:42 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:24.329 07:18:42 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:24.329 07:18:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.589 Malloc0 00:04:24.589 07:18:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.589 Malloc1 00:04:24.850 07:18:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.850 07:18:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:24.850 /dev/nbd0 00:04:24.850 07:18:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:24.850 07:18:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.850 1+0 records in 00:04:24.850 1+0 records out 00:04:24.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285048 s, 14.4 MB/s 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:24.850 07:18:43 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:24.850 07:18:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.850 07:18:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.850 07:18:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:25.112 /dev/nbd1 00:04:25.112 07:18:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:25.112 07:18:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.112 1+0 records in 00:04:25.112 1+0 records out 00:04:25.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274929 s, 14.9 MB/s 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:25.112 07:18:43 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:25.112 07:18:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.112 07:18:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.112 
07:18:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.112 07:18:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.112 07:18:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.372 07:18:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:25.372 { 00:04:25.372 "nbd_device": "/dev/nbd0", 00:04:25.372 "bdev_name": "Malloc0" 00:04:25.372 }, 00:04:25.372 { 00:04:25.372 "nbd_device": "/dev/nbd1", 00:04:25.372 "bdev_name": "Malloc1" 00:04:25.372 } 00:04:25.372 ]' 00:04:25.372 07:18:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.372 { 00:04:25.372 "nbd_device": "/dev/nbd0", 00:04:25.372 "bdev_name": "Malloc0" 00:04:25.372 }, 00:04:25.372 { 00:04:25.372 "nbd_device": "/dev/nbd1", 00:04:25.373 "bdev_name": "Malloc1" 00:04:25.373 } 00:04:25.373 ]' 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:25.373 /dev/nbd1' 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:25.373 /dev/nbd1' 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:25.373 256+0 records in 00:04:25.373 256+0 records out 00:04:25.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116632 s, 89.9 MB/s 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:25.373 256+0 records in 00:04:25.373 256+0 records out 00:04:25.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118642 s, 88.4 MB/s 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:25.373 256+0 records in 00:04:25.373 256+0 records out 00:04:25.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128532 s, 81.6 MB/s 00:04:25.373 07:18:43 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.373 07:18:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.633 07:18:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:25.894 07:18:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:25.894 07:18:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:25.894 07:18:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:25.894 07:18:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.894 07:18:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:25.894 07:18:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:25.894 07:18:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.894 07:18:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.894 07:18:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.894 07:18:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.894 07:18:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.154 07:18:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.154 07:18:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.154 07:18:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.154 07:18:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.154 07:18:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.154 07:18:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.154 07:18:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:26.154 07:18:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.154 07:18:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.154 07:18:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.154 07:18:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.154 07:18:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.154 07:18:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.415 07:18:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:26.415 [2024-11-20 07:18:44.488466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.415 [2024-11-20 07:18:44.518159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.415 [2024-11-20 07:18:44.518159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.415 [2024-11-20 07:18:44.547420] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.415 [2024-11-20 07:18:44.547445] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:29.714 07:18:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:29.714 07:18:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:29.714 spdk_app_start Round 1 00:04:29.714 07:18:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3165367 /var/tmp/spdk-nbd.sock 00:04:29.714 07:18:47 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3165367 ']' 00:04:29.714 07:18:47 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.714 07:18:47 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:29.714 07:18:47 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:29.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:29.714 07:18:47 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:29.714 07:18:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:29.714 07:18:47 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:29.714 07:18:47 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:29.714 07:18:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.714 Malloc0 00:04:29.714 07:18:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.975 Malloc1 00:04:29.975 07:18:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.975 07:18:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:29.975 /dev/nbd0 00:04:30.235 07:18:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:30.235 07:18:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:30.235 07:18:48 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:30.235 07:18:48 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:30.235 07:18:48 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:30.235 07:18:48 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:30.235 07:18:48 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:30.236 1+0 records in 00:04:30.236 1+0 records out 00:04:30.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305567 s, 13.4 MB/s 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:30.236 07:18:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.236 07:18:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.236 07:18:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:30.236 /dev/nbd1 00:04:30.236 07:18:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:30.236 07:18:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:30.236 07:18:48 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.496 1+0 records in 00:04:30.496 1+0 records out 00:04:30.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280782 s, 14.6 MB/s 00:04:30.496 07:18:48 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.496 07:18:48 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:30.496 07:18:48 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.496 07:18:48 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:30.496 07:18:48 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:30.496 { 00:04:30.496 "nbd_device": "/dev/nbd0", 00:04:30.496 "bdev_name": "Malloc0" 00:04:30.496 }, 00:04:30.496 { 00:04:30.496 "nbd_device": "/dev/nbd1", 00:04:30.496 "bdev_name": "Malloc1" 00:04:30.496 } 00:04:30.496 ]' 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.496 { 00:04:30.496 "nbd_device": "/dev/nbd0", 00:04:30.496 "bdev_name": "Malloc0" 00:04:30.496 }, 00:04:30.496 { 00:04:30.496 "nbd_device": "/dev/nbd1", 00:04:30.496 "bdev_name": "Malloc1" 00:04:30.496 } 00:04:30.496 ]' 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.496 /dev/nbd1' 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.496 /dev/nbd1' 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.496 07:18:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.757 256+0 records in 00:04:30.757 256+0 records out 00:04:30.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118412 s, 88.6 MB/s 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.757 256+0 records in 00:04:30.757 256+0 records out 00:04:30.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119372 s, 87.8 MB/s 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.757 256+0 records in 00:04:30.757 256+0 records out 00:04:30.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130402 s, 80.4 MB/s 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.757 07:18:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:31.017 07:18:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:31.017 07:18:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:31.017 07:18:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:31.017 07:18:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.017 07:18:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.017 07:18:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:31.017 07:18:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.017 07:18:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.017 07:18:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.017 07:18:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.017 07:18:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.278 07:18:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.278 07:18:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.278 07:18:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.278 07:18:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.278 07:18:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.278 07:18:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.278 07:18:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:31.278 07:18:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.278 07:18:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.278 07:18:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.278 07:18:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.278 07:18:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:31.278 07:18:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.537 07:18:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:31.537 [2024-11-20 07:18:49.656359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.537 [2024-11-20 07:18:49.686870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.537 [2024-11-20 07:18:49.687030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.537 [2024-11-20 07:18:49.716797] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:31.537 [2024-11-20 07:18:49.716829] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:34.835 07:18:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:34.836 07:18:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:34.836 spdk_app_start Round 2 00:04:34.836 07:18:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3165367 /var/tmp/spdk-nbd.sock 00:04:34.836 07:18:52 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3165367 ']' 00:04:34.836 07:18:52 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.836 07:18:52 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:34.836 07:18:52 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:34.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:34.836 07:18:52 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:34.836 07:18:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.836 07:18:52 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:34.836 07:18:52 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:34.836 07:18:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.836 Malloc0 00:04:34.836 07:18:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.096 Malloc1 00:04:35.096 07:18:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.096 07:18:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:35.357 /dev/nbd0 00:04:35.357 07:18:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:35.357 07:18:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:35.357 1+0 records in 00:04:35.357 1+0 records out 00:04:35.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031503 s, 13.0 MB/s 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:35.357 07:18:53 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:35.357 07:18:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.357 07:18:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.357 07:18:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:35.618 /dev/nbd1 00:04:35.618 07:18:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:35.618 07:18:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.618 1+0 records in 00:04:35.618 1+0 records out 00:04:35.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298548 s, 13.7 MB/s 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:35.618 07:18:53 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:35.618 07:18:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.618 07:18:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.618 07:18:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.618 07:18:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.618 07:18:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.618 07:18:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:35.618 { 00:04:35.618 "nbd_device": "/dev/nbd0", 00:04:35.618 "bdev_name": "Malloc0" 00:04:35.618 }, 00:04:35.618 { 00:04:35.618 "nbd_device": "/dev/nbd1", 00:04:35.618 "bdev_name": "Malloc1" 00:04:35.618 } 00:04:35.618 ]' 00:04:35.618 07:18:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:35.618 { 00:04:35.618 "nbd_device": "/dev/nbd0", 00:04:35.618 "bdev_name": "Malloc0" 00:04:35.618 }, 00:04:35.618 { 00:04:35.618 "nbd_device": "/dev/nbd1", 00:04:35.618 "bdev_name": "Malloc1" 00:04:35.618 } 00:04:35.618 ]' 00:04:35.618 07:18:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:35.879 /dev/nbd1' 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:35.879 /dev/nbd1' 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:35.879 256+0 records in 00:04:35.879 256+0 records out 00:04:35.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120907 s, 86.7 MB/s 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:35.879 256+0 records in 00:04:35.879 256+0 records out 00:04:35.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118439 s, 88.5 MB/s 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:35.879 256+0 records in 00:04:35.879 256+0 records out 00:04:35.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129829 s, 80.8 MB/s 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:35.879 07:18:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:35.880 07:18:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.880 07:18:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.140 07:18:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.401 07:18:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:36.401 07:18:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:36.401 07:18:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.401 07:18:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:36.401 07:18:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.401 07:18:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:36.401 07:18:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:36.401 07:18:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:36.401 07:18:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:36.401 07:18:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:36.401 07:18:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:36.401 07:18:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:36.401 07:18:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:36.661 07:18:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:36.661 [2024-11-20 07:18:54.813830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.662 [2024-11-20 07:18:54.843877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.662 [2024-11-20 07:18:54.843887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.922 [2024-11-20 07:18:54.873191] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:36.922 [2024-11-20 07:18:54.873229] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:40.219 07:18:57 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3165367 /var/tmp/spdk-nbd.sock 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3165367 ']' 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:40.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:40.219 07:18:57 event.app_repeat -- event/event.sh@39 -- # killprocess 3165367 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3165367 ']' 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3165367 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3165367 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:40.219 07:18:57 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3165367' 00:04:40.219 killing process with pid 3165367 00:04:40.220 07:18:57 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3165367 00:04:40.220 07:18:57 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3165367 00:04:40.220 spdk_app_start is called in Round 0. 00:04:40.220 Shutdown signal received, stop current app iteration 00:04:40.220 Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 reinitialization... 00:04:40.220 spdk_app_start is called in Round 1. 00:04:40.220 Shutdown signal received, stop current app iteration 00:04:40.220 Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 reinitialization... 00:04:40.220 spdk_app_start is called in Round 2. 00:04:40.220 Shutdown signal received, stop current app iteration 00:04:40.220 Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 reinitialization... 00:04:40.220 spdk_app_start is called in Round 3. 
00:04:40.220 Shutdown signal received, stop current app iteration 00:04:40.220 07:18:58 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:40.220 07:18:58 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:40.220 00:04:40.220 real 0m15.878s 00:04:40.220 user 0m34.871s 00:04:40.220 sys 0m2.273s 00:04:40.220 07:18:58 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.220 07:18:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:40.220 ************************************ 00:04:40.220 END TEST app_repeat 00:04:40.220 ************************************ 00:04:40.220 07:18:58 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:40.220 07:18:58 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:40.220 07:18:58 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.220 07:18:58 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.220 07:18:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.220 ************************************ 00:04:40.220 START TEST cpu_locks 00:04:40.220 ************************************ 00:04:40.220 07:18:58 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:40.220 * Looking for test storage... 00:04:40.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:40.220 07:18:58 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:40.220 07:18:58 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:40.220 07:18:58 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:40.220 07:18:58 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.220 07:18:58 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:40.220 07:18:58 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.220 07:18:58 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:40.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.220 --rc genhtml_branch_coverage=1 00:04:40.220 --rc genhtml_function_coverage=1 00:04:40.220 --rc genhtml_legend=1 00:04:40.220 --rc geninfo_all_blocks=1 00:04:40.220 --rc geninfo_unexecuted_blocks=1 00:04:40.220 00:04:40.220 ' 00:04:40.220 07:18:58 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:40.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.220 --rc genhtml_branch_coverage=1 00:04:40.220 --rc genhtml_function_coverage=1 00:04:40.220 --rc genhtml_legend=1 00:04:40.220 --rc geninfo_all_blocks=1 00:04:40.220 --rc geninfo_unexecuted_blocks=1 00:04:40.220 00:04:40.220 ' 00:04:40.220 07:18:58 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:40.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.220 --rc genhtml_branch_coverage=1 00:04:40.220 --rc genhtml_function_coverage=1 00:04:40.220 --rc genhtml_legend=1 00:04:40.220 --rc geninfo_all_blocks=1 00:04:40.220 --rc geninfo_unexecuted_blocks=1 00:04:40.220 00:04:40.220 ' 00:04:40.220 07:18:58 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:40.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.220 --rc genhtml_branch_coverage=1 00:04:40.220 --rc genhtml_function_coverage=1 00:04:40.220 --rc genhtml_legend=1 00:04:40.220 --rc geninfo_all_blocks=1 00:04:40.220 --rc geninfo_unexecuted_blocks=1 00:04:40.220 00:04:40.220 ' 00:04:40.220 07:18:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:40.220 07:18:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:40.220 07:18:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:40.220 07:18:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:40.220 07:18:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.220 07:18:58 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.220 07:18:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.220 ************************************ 
00:04:40.220 START TEST default_locks 00:04:40.220 ************************************ 00:04:40.220 07:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:40.220 07:18:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3168786 00:04:40.220 07:18:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3168786 00:04:40.220 07:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3168786 ']' 00:04:40.220 07:18:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.220 07:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.220 07:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.220 07:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.220 07:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.220 07:18:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.481 [2024-11-20 07:18:58.462010] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:04:40.481 [2024-11-20 07:18:58.462063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3168786 ] 00:04:40.481 [2024-11-20 07:18:58.546627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.481 [2024-11-20 07:18:58.587607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.420 07:18:59 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:41.420 07:18:59 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:41.420 07:18:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3168786 00:04:41.420 07:18:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:41.420 07:18:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3168786 00:04:41.680 lslocks: write error 00:04:41.680 07:18:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3168786 00:04:41.680 07:18:59 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3168786 ']' 00:04:41.680 07:18:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3168786 00:04:41.680 07:18:59 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:04:41.680 07:18:59 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:41.680 07:18:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3168786 00:04:41.680 07:18:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:41.680 07:18:59 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:41.680 07:18:59 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 3168786' 00:04:41.680 killing process with pid 3168786 00:04:41.680 07:18:59 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3168786 00:04:41.680 07:18:59 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3168786 00:04:41.941 07:19:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3168786 00:04:41.941 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:41.941 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3168786 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3168786 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3168786 ']' 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
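Killing the target and then re-running waitforlisten under NOT is the core assertion of default_locks: the helper must now fail, and the `kill: (3168786) - No such process` / `process ... is no longer running` lines that follow are the expected outcome, not a test fault. A condensed sketch, with NOT simplified from the autotest_common.sh version traced above:

    # Assert that a command fails: NOT inverts the exit status (the es=1 path above).
    NOT() { if "$@"; then return 1; else return 0; fi; }
    killprocess "$spdk_tgt_pid"             # SIGTERM, then wait, as traced
    NOT waitforlisten "$spdk_tgt_pid"       # must fail: the pid is gone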
00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3168786) - No such process 00:04:41.942 ERROR: process (pid: 3168786) is no longer running 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:41.942 00:04:41.942 real 0m1.680s 00:04:41.942 user 0m1.815s 00:04:41.942 sys 0m0.574s 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.942 07:19:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.942 ************************************ 00:04:41.942 END TEST default_locks 00:04:41.942 ************************************ 00:04:41.942 07:19:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:41.942 07:19:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.942 07:19:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.942 07:19:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.221 ************************************ 00:04:42.221 START TEST default_locks_via_rpc 00:04:42.221 ************************************ 00:04:42.221 07:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:04:42.221 07:19:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3169173 00:04:42.221 07:19:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.221 07:19:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3169173 00:04:42.221 07:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3169173 ']' 00:04:42.221 07:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.221 07:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:42.221 07:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
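The bookkeeping all of these cpu_locks tests assert on is a set of /var/tmp/spdk_cpu_lock_NNN files, one flock per claimed core. locks_exist checks that a given pid holds one, and no_locks checks that none survive teardown; the recurring `lslocks: write error` is a harmless broken-pipe artifact of `grep -q` exiting as soon as it matches. Hedged reconstructions of both helpers from the traces:

    # One lock file per claimed core, visible through lslocks.
    locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }
    no_locks() {
        local lock_files=(/var/tmp/spdk_cpu_lock_*)   # nullglob assumed: trace shows lock_files=()
        (( ${#lock_files[@]} == 0 ))                  # trace evaluates the inverse, (( 0 != 0 ))
    }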
00:04:42.221 07:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:42.221 07:19:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.221 [2024-11-20 07:19:00.212221] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:04:42.221 [2024-11-20 07:19:00.212278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3169173 ] 00:04:42.221 [2024-11-20 07:19:00.302724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.221 [2024-11-20 07:19:00.336622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3169173 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3169173 00:04:42.902 07:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.502 07:19:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3169173 00:04:43.502 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3169173 ']' 00:04:43.502 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3169173 00:04:43.502 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:04:43.502 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:43.502 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3169173 00:04:43.502 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:43.502 
07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:43.502 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3169173' 00:04:43.502 killing process with pid 3169173 00:04:43.502 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3169173 00:04:43.502 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3169173 00:04:43.764 00:04:43.764 real 0m1.630s 00:04:43.764 user 0m1.764s 00:04:43.764 sys 0m0.569s 00:04:43.764 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:43.764 07:19:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.764 ************************************ 00:04:43.764 END TEST default_locks_via_rpc 00:04:43.764 ************************************ 00:04:43.764 07:19:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:43.764 07:19:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:43.764 07:19:01 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.764 07:19:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.764 ************************************ 00:04:43.764 START TEST non_locking_app_on_locked_coremask 00:04:43.764 ************************************ 00:04:43.764 07:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:04:43.764 07:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3169547 00:04:43.764 07:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3169547 /var/tmp/spdk.sock 00:04:43.764 07:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.764 07:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3169547 ']' 00:04:43.764 07:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.764 07:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:43.764 07:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.764 07:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:43.764 07:19:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.764 [2024-11-20 07:19:01.916830] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:04:43.764 [2024-11-20 07:19:01.916885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3169547 ] 00:04:44.024 [2024-11-20 07:19:02.003747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.024 [2024-11-20 07:19:02.036047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.594 07:19:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.594 07:19:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:44.594 07:19:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3169705 00:04:44.594 07:19:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3169705 /var/tmp/spdk2.sock 00:04:44.594 07:19:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:44.594 07:19:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3169705 ']' 00:04:44.594 07:19:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.594 07:19:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:44.594 07:19:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:44.594 07:19:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:44.594 07:19:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.594 [2024-11-20 07:19:02.751371] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:04:44.594 [2024-11-20 07:19:02.751422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3169705 ] 00:04:44.855 [2024-11-20 07:19:02.838674] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:44.855 [2024-11-20 07:19:02.838695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.855 [2024-11-20 07:19:02.897183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.425 07:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:45.425 07:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:45.425 07:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3169547 00:04:45.425 07:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3169547 00:04:45.425 07:19:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.995 lslocks: write error 00:04:45.995 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3169547 00:04:45.995 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3169547 ']' 00:04:45.995 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3169547 00:04:45.995 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:45.995 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:45.995 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3169547 00:04:45.995 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:45.995 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:45.995 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3169547' 00:04:45.995 killing process with pid 3169547 00:04:45.995 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3169547 00:04:45.995 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3169547 00:04:46.565 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3169705 00:04:46.565 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3169705 ']' 00:04:46.565 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3169705 00:04:46.565 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:46.565 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:46.565 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3169705 00:04:46.565 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:46.565 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:46.565 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3169705' 00:04:46.565 
killing process with pid 3169705 00:04:46.565 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3169705 00:04:46.565 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3169705 00:04:46.825 00:04:46.825 real 0m2.937s 00:04:46.825 user 0m3.263s 00:04:46.825 sys 0m0.901s 00:04:46.825 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.825 07:19:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.825 ************************************ 00:04:46.825 END TEST non_locking_app_on_locked_coremask 00:04:46.825 ************************************ 00:04:46.825 07:19:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:46.825 07:19:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.825 07:19:04 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.825 07:19:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.825 ************************************ 00:04:46.825 START TEST locking_app_on_unlocked_coremask 00:04:46.825 ************************************ 00:04:46.825 07:19:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:04:46.825 07:19:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3170084 00:04:46.825 07:19:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3170084 /var/tmp/spdk.sock 00:04:46.825 07:19:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:46.825 07:19:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3170084 ']' 00:04:46.825 07:19:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.825 07:19:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:46.825 07:19:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.825 07:19:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:46.825 07:19:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.825 [2024-11-20 07:19:04.930419] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:04:46.825 [2024-11-20 07:19:04.930474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3170084 ] 00:04:46.825 [2024-11-20 07:19:05.016592] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
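locking_app_on_unlocked_coremask inverts the usual setup: the first target is started with --disable-cpumask-locks, so core 0 is left unlocked and a second, lock-enabled target on its own RPC socket can claim it. The shape of the scenario, with `spdk_tgt` standing in for the full build/bin path traced above:

    # First instance opts out of core locking; second instance then takes core 0.
    spdk_tgt -m 0x1 --disable-cpumask-locks &          # no spdk_cpu_lock_000 created
    waitforlisten $! /var/tmp/spdk.sock
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &           # succeeds and locks core 0
    waitforlisten $! /var/tmp/spdk2.sock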
00:04:46.825 [2024-11-20 07:19:05.016619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.085 [2024-11-20 07:19:05.050127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.655 07:19:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.655 07:19:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:47.655 07:19:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3170414 00:04:47.655 07:19:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3170414 /var/tmp/spdk2.sock 00:04:47.655 07:19:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3170414 ']' 00:04:47.655 07:19:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:47.655 07:19:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.655 07:19:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.655 07:19:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.655 07:19:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.655 07:19:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.655 [2024-11-20 07:19:05.784719] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:04:47.655 [2024-11-20 07:19:05.784782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3170414 ] 00:04:47.915 [2024-11-20 07:19:05.872615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.915 [2024-11-20 07:19:05.930929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.486 07:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:48.486 07:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:48.486 07:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3170414 00:04:48.486 07:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3170414 00:04:48.486 07:19:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:49.056 lslocks: write error 00:04:49.056 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3170084 00:04:49.056 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3170084 ']' 00:04:49.056 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3170084 00:04:49.056 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:49.056 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.056 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3170084 00:04:49.056 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:49.056 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:49.056 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3170084' 00:04:49.056 killing process with pid 3170084 00:04:49.056 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3170084 00:04:49.056 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3170084 00:04:49.627 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3170414 00:04:49.627 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3170414 ']' 00:04:49.627 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3170414 00:04:49.627 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:49.627 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.627 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3170414 00:04:49.627 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:49.627 07:19:07 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:49.627 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3170414' 00:04:49.627 killing process with pid 3170414 00:04:49.627 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3170414 00:04:49.627 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3170414 00:04:49.888 00:04:49.888 real 0m2.979s 00:04:49.888 user 0m3.323s 00:04:49.888 sys 0m0.905s 00:04:49.888 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.888 07:19:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.888 ************************************ 00:04:49.888 END TEST locking_app_on_unlocked_coremask 00:04:49.888 ************************************ 00:04:49.888 07:19:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:49.888 07:19:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.888 07:19:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.888 07:19:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.888 ************************************ 00:04:49.888 START TEST locking_app_on_locked_coremask 00:04:49.888 ************************************ 00:04:49.888 07:19:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:04:49.888 07:19:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3170792 00:04:49.888 07:19:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3170792 /var/tmp/spdk.sock 00:04:49.888 07:19:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.888 07:19:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3170792 ']' 00:04:49.888 07:19:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.888 07:19:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:49.888 07:19:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.888 07:19:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:49.888 07:19:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.888 [2024-11-20 07:19:07.991501] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:04:49.888 [2024-11-20 07:19:07.991552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3170792 ] 00:04:49.888 [2024-11-20 07:19:08.076164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.149 [2024-11-20 07:19:08.108273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3171009 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3171009 /var/tmp/spdk2.sock 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3171009 /var/tmp/spdk2.sock 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3171009 /var/tmp/spdk2.sock 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3171009 ']' 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:50.721 07:19:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.721 [2024-11-20 07:19:08.834421] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:04:50.721 [2024-11-20 07:19:08.834477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171009 ] 00:04:50.721 [2024-11-20 07:19:08.922074] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3170792 has claimed it. 00:04:50.721 [2024-11-20 07:19:08.922106] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:51.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3171009) - No such process 00:04:51.291 ERROR: process (pid: 3171009) is no longer running 00:04:51.291 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:51.291 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:51.291 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:51.291 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:51.291 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:51.291 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:51.291 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3170792 00:04:51.291 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3170792 00:04:51.291 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.552 lslocks: write error 00:04:51.552 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3170792 00:04:51.552 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3170792 ']' 00:04:51.552 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3170792 00:04:51.552 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:51.552 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:51.552 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3170792 00:04:51.552 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:51.552 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:51.552 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3170792' 00:04:51.552 killing process with pid 3170792 00:04:51.552 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3170792 00:04:51.552 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3170792 00:04:51.812 00:04:51.812 real 0m1.952s 00:04:51.812 user 0m2.242s 00:04:51.812 sys 0m0.501s 00:04:51.812 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
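The claim_cpu_cores ERROR above is the point of locking_app_on_locked_coremask: with locks enabled on both sides, the second target finds core 0's flock held by pid 3170792 and exits before ever listening, which the NOT waitforlisten wrapper converts into a pass. Condensed, with illustrative socket paths:

    # A second lock-enabled target on an already-claimed core must abort at startup.
    spdk_tgt -m 0x1 &                                  # claims /var/tmp/spdk_cpu_lock_000
    waitforlisten $! /var/tmp/spdk.sock
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &           # "Cannot create lock on core 0"
    NOT waitforlisten $! /var/tmp/spdk2.sock           # expected failure = test pass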
00:04:51.812 07:19:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.812 ************************************ 00:04:51.812 END TEST locking_app_on_locked_coremask 00:04:51.812 ************************************ 00:04:51.812 07:19:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:51.812 07:19:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.812 07:19:09 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.812 07:19:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.812 ************************************ 00:04:51.812 START TEST locking_overlapped_coremask 00:04:51.812 ************************************ 00:04:51.812 07:19:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:04:51.812 07:19:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3171167 00:04:51.812 07:19:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3171167 /var/tmp/spdk.sock 00:04:51.812 07:19:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:51.812 07:19:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3171167 ']' 00:04:51.812 07:19:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.812 07:19:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:51.812 07:19:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.812 07:19:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:51.812 07:19:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.812 [2024-11-20 07:19:10.015485] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:04:51.812 [2024-11-20 07:19:10.015536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171167 ] 00:04:52.072 [2024-11-20 07:19:10.102202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:52.072 [2024-11-20 07:19:10.135468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.072 [2024-11-20 07:19:10.135585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.072 [2024-11-20 07:19:10.135587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3171501 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3171501 /var/tmp/spdk2.sock 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3171501 /var/tmp/spdk2.sock 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3171501 /var/tmp/spdk2.sock 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3171501 ']' 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:52.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.643 07:19:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.904 [2024-11-20 07:19:10.871167] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:04:52.904 [2024-11-20 07:19:10.871221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171501 ] 00:04:52.904 [2024-11-20 07:19:10.984845] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3171167 has claimed it. 00:04:52.904 [2024-11-20 07:19:10.984886] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:53.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3171501) - No such process 00:04:53.475 ERROR: process (pid: 3171501) is no longer running 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3171167 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3171167 ']' 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3171167 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3171167 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3171167' 00:04:53.475 killing process with pid 3171167 00:04:53.475 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3171167 00:04:53.475 07:19:11 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3171167 00:04:53.735 00:04:53.735 real 0m1.781s 00:04:53.735 user 0m5.149s 00:04:53.735 sys 0m0.399s 00:04:53.735 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.735 07:19:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.735 ************************************ 00:04:53.735 END TEST locking_overlapped_coremask 00:04:53.735 ************************************ 00:04:53.735 07:19:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:53.735 07:19:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:53.735 07:19:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.735 07:19:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.735 ************************************ 00:04:53.735 START TEST locking_overlapped_coremask_via_rpc 00:04:53.735 ************************************ 00:04:53.735 07:19:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:04:53.735 07:19:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3171564 00:04:53.735 07:19:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3171564 /var/tmp/spdk.sock 00:04:53.735 07:19:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:53.735 07:19:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3171564 ']' 00:04:53.735 07:19:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.735 07:19:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.735 07:19:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.735 07:19:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.735 07:19:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.735 [2024-11-20 07:19:11.872682] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:04:53.736 [2024-11-20 07:19:11.872734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171564 ] 00:04:53.996 [2024-11-20 07:19:11.959519] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
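The "CPU core locks deactivated" NOTICE above is the effect of --disable-cpumask-locks: the target boots without creating any /var/tmp/spdk_cpu_lock_* files, so two targets can temporarily share cores until locks are claimed over RPC. A minimal sketch of that startup, assuming paths relative to the SPDK repo root (the run above uses absolute Jenkins workspace paths) and the masks from this run (0x7 = cores 0-2, 0x1c = cores 2-4):

    # First target: cores 0-2, default socket /var/tmp/spdk.sock,
    # no lock files created at boot:
    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    # Second target: cores 2-4 on its own socket; the overlap on core 2
    # is tolerated for now because nothing has been locked yet:
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &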
00:04:53.996 [2024-11-20 07:19:11.959542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:53.996 [2024-11-20 07:19:11.993179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.996 [2024-11-20 07:19:11.993328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.996 [2024-11-20 07:19:11.993329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.570 07:19:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:54.570 07:19:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:54.570 07:19:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3171876 00:04:54.570 07:19:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:54.570 07:19:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3171876 /var/tmp/spdk2.sock 00:04:54.570 07:19:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3171876 ']' 00:04:54.570 07:19:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.570 07:19:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:54.570 07:19:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.570 07:19:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:54.570 07:19:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.570 [2024-11-20 07:19:12.719390] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:04:54.570 [2024-11-20 07:19:12.719443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171876 ] 00:04:54.831 [2024-11-20 07:19:12.809175] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:54.831 [2024-11-20 07:19:12.809196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:54.831 [2024-11-20 07:19:12.870506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.831 [2024-11-20 07:19:12.875870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.831 [2024-11-20 07:19:12.875872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.403 [2024-11-20 07:19:13.525807] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3171564 has claimed it. 
00:04:55.403 request: 00:04:55.403 { 00:04:55.403 "method": "framework_enable_cpumask_locks", 00:04:55.403 "req_id": 1 00:04:55.403 } 00:04:55.403 Got JSON-RPC error response 00:04:55.403 response: 00:04:55.403 { 00:04:55.403 "code": -32603, 00:04:55.403 "message": "Failed to claim CPU core: 2" 00:04:55.403 } 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3171564 /var/tmp/spdk.sock 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3171564 ']' 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.403 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.664 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.664 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:55.664 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3171876 /var/tmp/spdk2.sock 00:04:55.664 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3171876 ']' 00:04:55.664 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.664 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.664 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
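The request/response pair above is the point of this test: once framework_enable_cpumask_locks is called on the first target (mask 0x7), cores 0-2 are locked, and the same RPC against the second target's socket fails with -32603 because core 2 is already claimed. A condensed sketch of the sequence, using the sockets from this run:

    # Claim locks on the first target; this creates
    # /var/tmp/spdk_cpu_lock_000 .. /var/tmp/spdk_cpu_lock_002:
    scripts/rpc.py framework_enable_cpumask_locks
    # The second target (mask 0x1c) overlaps on core 2, so the same RPC
    # over its socket returns the "Failed to claim CPU core: 2" error:
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # The check_remaining_locks helper seen in these traces then verifies
    # that exactly cores 0-2 hold lock files:
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]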
00:04:55.664 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.664 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.925 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.925 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:55.925 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:55.925 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:55.925 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:55.925 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:55.925 00:04:55.925 real 0m2.084s 00:04:55.925 user 0m0.859s 00:04:55.925 sys 0m0.156s 00:04:55.925 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.925 07:19:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.925 ************************************ 00:04:55.925 END TEST locking_overlapped_coremask_via_rpc 00:04:55.925 ************************************ 00:04:55.925 07:19:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:55.925 07:19:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3171564 ]] 00:04:55.925 07:19:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3171564 00:04:55.925 07:19:13 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3171564 ']' 00:04:55.925 07:19:13 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3171564 00:04:55.925 07:19:13 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:04:55.925 07:19:13 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.925 07:19:13 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3171564 00:04:55.925 07:19:14 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.925 07:19:14 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.925 07:19:14 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3171564' 00:04:55.925 killing process with pid 3171564 00:04:55.925 07:19:14 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3171564 00:04:55.925 07:19:14 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3171564 00:04:56.185 07:19:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3171876 ]] 00:04:56.185 07:19:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3171876 00:04:56.185 07:19:14 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3171876 ']' 00:04:56.185 07:19:14 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3171876 00:04:56.185 07:19:14 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:04:56.185 07:19:14 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:04:56.185 07:19:14 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3171876 00:04:56.185 07:19:14 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:56.185 07:19:14 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:56.185 07:19:14 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3171876' 00:04:56.185 killing process with pid 3171876 00:04:56.185 07:19:14 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3171876 00:04:56.185 07:19:14 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3171876 00:04:56.446 07:19:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:56.446 07:19:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:56.446 07:19:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3171564 ]] 00:04:56.446 07:19:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3171564 00:04:56.446 07:19:14 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3171564 ']' 00:04:56.446 07:19:14 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3171564 00:04:56.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3171564) - No such process 00:04:56.446 07:19:14 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3171564 is not found' 00:04:56.446 Process with pid 3171564 is not found 00:04:56.446 07:19:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3171876 ]] 00:04:56.446 07:19:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3171876 00:04:56.446 07:19:14 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3171876 ']' 00:04:56.446 07:19:14 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3171876 00:04:56.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3171876) - No such process 00:04:56.447 07:19:14 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3171876 is not found' 00:04:56.447 Process with pid 3171876 is not found 00:04:56.447 07:19:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:56.447 00:04:56.447 real 0m16.312s 00:04:56.447 user 0m28.532s 00:04:56.447 sys 0m4.959s 00:04:56.447 07:19:14 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.447 07:19:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.447 ************************************ 00:04:56.447 END TEST cpu_locks 00:04:56.447 ************************************ 00:04:56.447 00:04:56.447 real 0m42.241s 00:04:56.447 user 1m22.842s 00:04:56.447 sys 0m8.357s 00:04:56.447 07:19:14 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.447 07:19:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.447 ************************************ 00:04:56.447 END TEST event 00:04:56.447 ************************************ 00:04:56.447 07:19:14 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:56.447 07:19:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.447 07:19:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.447 07:19:14 -- common/autotest_common.sh@10 -- # set +x 00:04:56.447 ************************************ 00:04:56.447 START TEST thread 00:04:56.447 ************************************ 00:04:56.447 07:19:14 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:56.708 * Looking for test storage... 00:04:56.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:56.708 07:19:14 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:56.708 07:19:14 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:04:56.708 07:19:14 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:56.708 07:19:14 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:56.708 07:19:14 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.708 07:19:14 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.708 07:19:14 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.708 07:19:14 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.708 07:19:14 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.708 07:19:14 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.708 07:19:14 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.708 07:19:14 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.708 07:19:14 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.708 07:19:14 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.708 07:19:14 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.708 07:19:14 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:56.708 07:19:14 thread -- scripts/common.sh@345 -- # : 1 00:04:56.708 07:19:14 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.708 07:19:14 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.708 07:19:14 thread -- scripts/common.sh@365 -- # decimal 1 00:04:56.708 07:19:14 thread -- scripts/common.sh@353 -- # local d=1 00:04:56.708 07:19:14 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.708 07:19:14 thread -- scripts/common.sh@355 -- # echo 1 00:04:56.708 07:19:14 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.708 07:19:14 thread -- scripts/common.sh@366 -- # decimal 2 00:04:56.708 07:19:14 thread -- scripts/common.sh@353 -- # local d=2 00:04:56.708 07:19:14 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.708 07:19:14 thread -- scripts/common.sh@355 -- # echo 2 00:04:56.708 07:19:14 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.708 07:19:14 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.708 07:19:14 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.708 07:19:14 thread -- scripts/common.sh@368 -- # return 0 00:04:56.708 07:19:14 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.708 07:19:14 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:56.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.708 --rc genhtml_branch_coverage=1 00:04:56.709 --rc genhtml_function_coverage=1 00:04:56.709 --rc genhtml_legend=1 00:04:56.709 --rc geninfo_all_blocks=1 00:04:56.709 --rc geninfo_unexecuted_blocks=1 00:04:56.709 00:04:56.709 ' 00:04:56.709 07:19:14 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:56.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.709 --rc genhtml_branch_coverage=1 00:04:56.709 --rc genhtml_function_coverage=1 00:04:56.709 --rc genhtml_legend=1 00:04:56.709 --rc geninfo_all_blocks=1 00:04:56.709 --rc geninfo_unexecuted_blocks=1 00:04:56.709 
00:04:56.709 ' 00:04:56.709 07:19:14 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:56.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.709 --rc genhtml_branch_coverage=1 00:04:56.709 --rc genhtml_function_coverage=1 00:04:56.709 --rc genhtml_legend=1 00:04:56.709 --rc geninfo_all_blocks=1 00:04:56.709 --rc geninfo_unexecuted_blocks=1 00:04:56.709 00:04:56.709 ' 00:04:56.709 07:19:14 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:56.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.709 --rc genhtml_branch_coverage=1 00:04:56.709 --rc genhtml_function_coverage=1 00:04:56.709 --rc genhtml_legend=1 00:04:56.709 --rc geninfo_all_blocks=1 00:04:56.709 --rc geninfo_unexecuted_blocks=1 00:04:56.709 00:04:56.709 ' 00:04:56.709 07:19:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:56.709 07:19:14 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:04:56.709 07:19:14 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.709 07:19:14 thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.709 ************************************ 00:04:56.709 START TEST thread_poller_perf 00:04:56.709 ************************************ 00:04:56.709 07:19:14 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:56.709 [2024-11-20 07:19:14.833904] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:04:56.709 [2024-11-20 07:19:14.834003] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3172326 ] 00:04:56.969 [2024-11-20 07:19:14.920395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.969 [2024-11-20 07:19:14.961388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.969 Running 1000 pollers for 1 seconds with 1 microseconds period. 
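How poller_cost in the table that follows is derived, using this run's own numbers (busy cycles, total_run_count, and tsc_hz are read straight off the table):

    # per-poll cost in cycles and in nanoseconds at tsc_hz = 2.4 GHz:
    busy=2408061310; runs=419000; tsc_hz=2400000000
    echo $(( busy / runs ))                        # 5747 cyc per poll
    echo $(( busy * 1000000000 / tsc_hz / runs ))  # 2394 nsec per poll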
00:04:57.909 [2024-11-20T06:19:16.119Z] ====================================== 00:04:57.909 [2024-11-20T06:19:16.119Z] busy:2408061310 (cyc) 00:04:57.909 [2024-11-20T06:19:16.119Z] total_run_count: 419000 00:04:57.909 [2024-11-20T06:19:16.119Z] tsc_hz: 2400000000 (cyc) 00:04:57.909 [2024-11-20T06:19:16.119Z] ====================================== 00:04:57.909 [2024-11-20T06:19:16.119Z] poller_cost: 5747 (cyc), 2394 (nsec) 00:04:57.909 00:04:57.909 real 0m1.184s 00:04:57.909 user 0m1.095s 00:04:57.909 sys 0m0.084s 00:04:57.909 07:19:15 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.909 07:19:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:57.909 ************************************ 00:04:57.909 END TEST thread_poller_perf 00:04:57.909 ************************************ 00:04:57.909 07:19:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:57.909 07:19:16 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:04:57.909 07:19:16 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.909 07:19:16 thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.909 ************************************ 00:04:57.909 START TEST thread_poller_perf 00:04:57.909 ************************************ 00:04:57.909 07:19:16 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:57.909 [2024-11-20 07:19:16.087136] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:04:57.909 [2024-11-20 07:19:16.087240] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3172676 ] 00:04:58.169 [2024-11-20 07:19:16.173485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.169 [2024-11-20 07:19:16.207219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.169 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:04:59.110 [2024-11-20T06:19:17.320Z] ====================================== 00:04:59.110 [2024-11-20T06:19:17.320Z] busy:2401619032 (cyc) 00:04:59.110 [2024-11-20T06:19:17.320Z] total_run_count: 5555000 00:04:59.110 [2024-11-20T06:19:17.320Z] tsc_hz: 2400000000 (cyc) 00:04:59.110 [2024-11-20T06:19:17.320Z] ====================================== 00:04:59.110 [2024-11-20T06:19:17.320Z] poller_cost: 432 (cyc), 180 (nsec) 00:04:59.110 00:04:59.110 real 0m1.168s 00:04:59.110 user 0m1.086s 00:04:59.110 sys 0m0.079s 00:04:59.110 07:19:17 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:59.110 07:19:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.110 ************************************ 00:04:59.110 END TEST thread_poller_perf 00:04:59.110 ************************************ 00:04:59.110 07:19:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:59.110 00:04:59.110 real 0m2.692s 00:04:59.110 user 0m2.338s 00:04:59.110 sys 0m0.368s 00:04:59.110 07:19:17 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:59.110 07:19:17 thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.110 ************************************ 00:04:59.110 END TEST thread 00:04:59.110 ************************************ 00:04:59.110 07:19:17 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:59.110 07:19:17 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:59.110 07:19:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.110 07:19:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.110 07:19:17 -- common/autotest_common.sh@10 -- # set +x 00:04:59.383 ************************************ 00:04:59.383 START TEST app_cmdline 00:04:59.383 ************************************ 00:04:59.383 07:19:17 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:59.383 * Looking for test storage... 
00:04:59.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:59.383 07:19:17 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:59.383 07:19:17 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:04:59.383 07:19:17 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.383 07:19:17 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.383 07:19:17 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:59.384 07:19:17 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.384 07:19:17 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.384 07:19:17 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.384 07:19:17 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:59.384 07:19:17 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.384 07:19:17 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.384 --rc genhtml_branch_coverage=1 00:04:59.384 --rc genhtml_function_coverage=1 00:04:59.384 --rc genhtml_legend=1 00:04:59.384 --rc geninfo_all_blocks=1 00:04:59.384 --rc geninfo_unexecuted_blocks=1 00:04:59.384 00:04:59.384 ' 00:04:59.384 07:19:17 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.384 --rc genhtml_branch_coverage=1 00:04:59.384 --rc genhtml_function_coverage=1 00:04:59.384 --rc genhtml_legend=1 00:04:59.384 --rc geninfo_all_blocks=1 00:04:59.384 --rc geninfo_unexecuted_blocks=1 
00:04:59.384 00:04:59.384 ' 00:04:59.384 07:19:17 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:59.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.384 --rc genhtml_branch_coverage=1 00:04:59.384 --rc genhtml_function_coverage=1 00:04:59.384 --rc genhtml_legend=1 00:04:59.384 --rc geninfo_all_blocks=1 00:04:59.384 --rc geninfo_unexecuted_blocks=1 00:04:59.384 00:04:59.384 ' 00:04:59.384 07:19:17 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.384 --rc genhtml_branch_coverage=1 00:04:59.384 --rc genhtml_function_coverage=1 00:04:59.384 --rc genhtml_legend=1 00:04:59.384 --rc geninfo_all_blocks=1 00:04:59.384 --rc geninfo_unexecuted_blocks=1 00:04:59.384 00:04:59.384 ' 00:04:59.384 07:19:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:59.384 07:19:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3173077 00:04:59.384 07:19:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3173077 00:04:59.385 07:19:17 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:59.385 07:19:17 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3173077 ']' 00:04:59.385 07:19:17 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.385 07:19:17 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:59.385 07:19:17 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.385 07:19:17 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:59.385 07:19:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:59.650 [2024-11-20 07:19:17.608914] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:04:59.650 [2024-11-20 07:19:17.608970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3173077 ] 00:04:59.650 [2024-11-20 07:19:17.696386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.650 [2024-11-20 07:19:17.727289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.222 07:19:18 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.222 07:19:18 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:00.222 07:19:18 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:00.483 { 00:05:00.483 "version": "SPDK v25.01-pre git sha1 12962b97e", 00:05:00.483 "fields": { 00:05:00.483 "major": 25, 00:05:00.483 "minor": 1, 00:05:00.483 "patch": 0, 00:05:00.483 "suffix": "-pre", 00:05:00.483 "commit": "12962b97e" 00:05:00.483 } 00:05:00.483 } 00:05:00.483 07:19:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:00.483 07:19:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:00.483 07:19:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:00.483 07:19:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:00.483 07:19:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:00.483 07:19:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:00.483 07:19:18 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.483 07:19:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:00.483 07:19:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:00.483 07:19:18 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.483 07:19:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:00.483 07:19:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:00.484 07:19:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:00.484 07:19:18 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:00.484 07:19:18 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:00.484 07:19:18 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:00.484 07:19:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.484 07:19:18 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:00.484 07:19:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.484 07:19:18 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:00.484 07:19:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.484 07:19:18 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:00.484 07:19:18 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:00.484 07:19:18 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:00.744 request: 00:05:00.744 { 00:05:00.744 "method": "env_dpdk_get_mem_stats", 00:05:00.744 "req_id": 1 00:05:00.744 } 00:05:00.744 Got JSON-RPC error response 00:05:00.744 response: 00:05:00.744 { 00:05:00.744 "code": -32601, 00:05:00.744 "message": "Method not found" 00:05:00.744 } 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:00.744 07:19:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3173077 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3173077 ']' 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3173077 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3173077 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3173077' 00:05:00.744 killing process with pid 3173077 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@971 -- # kill 3173077 00:05:00.744 07:19:18 app_cmdline -- common/autotest_common.sh@976 -- # wait 3173077 00:05:01.005 00:05:01.005 real 0m1.708s 00:05:01.005 user 0m2.075s 00:05:01.005 sys 0m0.436s 00:05:01.005 07:19:19 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.005 07:19:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:01.005 ************************************ 00:05:01.005 END TEST app_cmdline 00:05:01.005 ************************************ 00:05:01.005 07:19:19 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:01.005 07:19:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.005 07:19:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.005 07:19:19 -- common/autotest_common.sh@10 -- # set +x 00:05:01.005 ************************************ 00:05:01.005 START TEST version 00:05:01.005 ************************************ 00:05:01.005 07:19:19 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:01.264 * Looking for test storage... 
00:05:01.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:01.264 07:19:19 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.264 07:19:19 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.264 07:19:19 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.264 07:19:19 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.264 07:19:19 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.264 07:19:19 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.264 07:19:19 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.264 07:19:19 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.264 07:19:19 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.264 07:19:19 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.264 07:19:19 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.264 07:19:19 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.264 07:19:19 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.264 07:19:19 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.264 07:19:19 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.264 07:19:19 version -- scripts/common.sh@344 -- # case "$op" in 00:05:01.264 07:19:19 version -- scripts/common.sh@345 -- # : 1 00:05:01.264 07:19:19 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.264 07:19:19 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.265 07:19:19 version -- scripts/common.sh@365 -- # decimal 1 00:05:01.265 07:19:19 version -- scripts/common.sh@353 -- # local d=1 00:05:01.265 07:19:19 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.265 07:19:19 version -- scripts/common.sh@355 -- # echo 1 00:05:01.265 07:19:19 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.265 07:19:19 version -- scripts/common.sh@366 -- # decimal 2 00:05:01.265 07:19:19 version -- scripts/common.sh@353 -- # local d=2 00:05:01.265 07:19:19 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.265 07:19:19 version -- scripts/common.sh@355 -- # echo 2 00:05:01.265 07:19:19 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.265 07:19:19 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.265 07:19:19 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.265 07:19:19 version -- scripts/common.sh@368 -- # return 0 00:05:01.265 07:19:19 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.265 07:19:19 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.265 --rc genhtml_branch_coverage=1 00:05:01.265 --rc genhtml_function_coverage=1 00:05:01.265 --rc genhtml_legend=1 00:05:01.265 --rc geninfo_all_blocks=1 00:05:01.265 --rc geninfo_unexecuted_blocks=1 00:05:01.265 00:05:01.265 ' 00:05:01.265 07:19:19 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.265 --rc genhtml_branch_coverage=1 00:05:01.265 --rc genhtml_function_coverage=1 00:05:01.265 --rc genhtml_legend=1 00:05:01.265 --rc geninfo_all_blocks=1 00:05:01.265 --rc geninfo_unexecuted_blocks=1 00:05:01.265 00:05:01.265 ' 00:05:01.265 07:19:19 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:01.265 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.265 --rc genhtml_branch_coverage=1 00:05:01.265 --rc genhtml_function_coverage=1 00:05:01.265 --rc genhtml_legend=1 00:05:01.265 --rc geninfo_all_blocks=1 00:05:01.265 --rc geninfo_unexecuted_blocks=1 00:05:01.265 00:05:01.265 ' 00:05:01.265 07:19:19 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.265 --rc genhtml_branch_coverage=1 00:05:01.265 --rc genhtml_function_coverage=1 00:05:01.265 --rc genhtml_legend=1 00:05:01.265 --rc geninfo_all_blocks=1 00:05:01.265 --rc geninfo_unexecuted_blocks=1 00:05:01.265 00:05:01.265 ' 00:05:01.265 07:19:19 version -- app/version.sh@17 -- # get_header_version major 00:05:01.265 07:19:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:01.265 07:19:19 version -- app/version.sh@14 -- # cut -f2 00:05:01.265 07:19:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:01.265 07:19:19 version -- app/version.sh@17 -- # major=25 00:05:01.265 07:19:19 version -- app/version.sh@18 -- # get_header_version minor 00:05:01.265 07:19:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:01.265 07:19:19 version -- app/version.sh@14 -- # cut -f2 00:05:01.265 07:19:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:01.265 07:19:19 version -- app/version.sh@18 -- # minor=1 00:05:01.265 07:19:19 version -- app/version.sh@19 -- # get_header_version patch 00:05:01.265 07:19:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:01.265 07:19:19 version -- app/version.sh@14 -- # cut -f2 00:05:01.265 07:19:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:01.265 07:19:19 version -- app/version.sh@19 -- # patch=0 00:05:01.265 07:19:19 version -- app/version.sh@20 -- # get_header_version suffix 00:05:01.265 07:19:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:01.265 07:19:19 version -- app/version.sh@14 -- # cut -f2 00:05:01.265 07:19:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:01.265 07:19:19 version -- app/version.sh@20 -- # suffix=-pre 00:05:01.265 07:19:19 version -- app/version.sh@22 -- # version=25.1 00:05:01.265 07:19:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:01.265 07:19:19 version -- app/version.sh@28 -- # version=25.1rc0 00:05:01.265 07:19:19 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:01.265 07:19:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:01.265 07:19:19 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:01.265 07:19:19 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:01.265 00:05:01.265 real 0m0.273s 00:05:01.265 user 0m0.166s 00:05:01.265 sys 0m0.156s 00:05:01.265 07:19:19 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.265 
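The grep/cut/tr pipeline traced above is all the version test does to read the version out of the header; condensed, with the path written relative to the SPDK repo root (the trace uses the absolute workspace path), and relying on tab-separated fields in version.h, which holds here since the traced cut -f2 succeeds:

    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 25
    minor=$(get_header_version MINOR)    # 1
    patch=$(get_header_version PATCH)    # 0
    # patch == 0, so the base version is 25.1; the -pre suffix maps to
    # rc0, giving 25.1rc0, which is then compared against
    # python3 -c 'import spdk; print(spdk.__version__)'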
07:19:19 version -- common/autotest_common.sh@10 -- # set +x 00:05:01.265 ************************************ 00:05:01.265 END TEST version 00:05:01.265 ************************************ 00:05:01.265 07:19:19 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:01.265 07:19:19 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:01.265 07:19:19 -- spdk/autotest.sh@194 -- # uname -s 00:05:01.265 07:19:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:01.265 07:19:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:01.265 07:19:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:01.265 07:19:19 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:01.265 07:19:19 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:01.265 07:19:19 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:01.265 07:19:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.265 07:19:19 -- common/autotest_common.sh@10 -- # set +x 00:05:01.525 07:19:19 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:01.525 07:19:19 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:01.525 07:19:19 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:01.525 07:19:19 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:01.525 07:19:19 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:01.525 07:19:19 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:01.525 07:19:19 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:01.525 07:19:19 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:01.525 07:19:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.525 07:19:19 -- common/autotest_common.sh@10 -- # set +x 00:05:01.525 ************************************ 00:05:01.525 START TEST nvmf_tcp 00:05:01.525 ************************************ 00:05:01.525 07:19:19 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:01.525 * Looking for test storage... 
00:05:01.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:01.525 07:19:19 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.525 07:19:19 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.525 07:19:19 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.525 07:19:19 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:01.525 07:19:19 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.526 07:19:19 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:01.526 07:19:19 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:01.526 07:19:19 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.526 07:19:19 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:01.787 07:19:19 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.787 07:19:19 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.787 07:19:19 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.787 07:19:19 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:01.787 07:19:19 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.787 07:19:19 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.787 --rc genhtml_branch_coverage=1 00:05:01.787 --rc genhtml_function_coverage=1 00:05:01.787 --rc genhtml_legend=1 00:05:01.787 --rc geninfo_all_blocks=1 00:05:01.787 --rc geninfo_unexecuted_blocks=1 00:05:01.787 00:05:01.787 ' 00:05:01.787 07:19:19 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.787 --rc genhtml_branch_coverage=1 00:05:01.787 --rc genhtml_function_coverage=1 00:05:01.787 --rc genhtml_legend=1 00:05:01.787 --rc geninfo_all_blocks=1 00:05:01.787 --rc geninfo_unexecuted_blocks=1 00:05:01.787 00:05:01.787 ' 00:05:01.787 07:19:19 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:01.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.787 --rc genhtml_branch_coverage=1 00:05:01.787 --rc genhtml_function_coverage=1 00:05:01.787 --rc genhtml_legend=1 00:05:01.787 --rc geninfo_all_blocks=1 00:05:01.787 --rc geninfo_unexecuted_blocks=1 00:05:01.787 00:05:01.787 ' 00:05:01.787 07:19:19 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.787 --rc genhtml_branch_coverage=1 00:05:01.787 --rc genhtml_function_coverage=1 00:05:01.787 --rc genhtml_legend=1 00:05:01.787 --rc geninfo_all_blocks=1 00:05:01.787 --rc geninfo_unexecuted_blocks=1 00:05:01.787 00:05:01.787 ' 00:05:01.787 07:19:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:01.787 07:19:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:01.787 07:19:19 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:01.787 07:19:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:01.787 07:19:19 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.787 07:19:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.787 ************************************ 00:05:01.787 START TEST nvmf_target_core 00:05:01.787 ************************************ 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:01.787 * Looking for test storage... 00:05:01.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:01.787 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.788 --rc genhtml_branch_coverage=1 00:05:01.788 --rc genhtml_function_coverage=1 00:05:01.788 --rc genhtml_legend=1 00:05:01.788 --rc geninfo_all_blocks=1 00:05:01.788 --rc geninfo_unexecuted_blocks=1 00:05:01.788 00:05:01.788 ' 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.788 --rc genhtml_branch_coverage=1 00:05:01.788 --rc genhtml_function_coverage=1 00:05:01.788 --rc genhtml_legend=1 00:05:01.788 --rc geninfo_all_blocks=1 00:05:01.788 --rc geninfo_unexecuted_blocks=1 00:05:01.788 00:05:01.788 ' 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:01.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.788 --rc genhtml_branch_coverage=1 00:05:01.788 --rc genhtml_function_coverage=1 00:05:01.788 --rc genhtml_legend=1 00:05:01.788 --rc geninfo_all_blocks=1 00:05:01.788 --rc geninfo_unexecuted_blocks=1 00:05:01.788 00:05:01.788 ' 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.788 --rc genhtml_branch_coverage=1 00:05:01.788 --rc genhtml_function_coverage=1 00:05:01.788 --rc genhtml_legend=1 00:05:01.788 --rc geninfo_all_blocks=1 00:05:01.788 --rc geninfo_unexecuted_blocks=1 00:05:01.788 00:05:01.788 ' 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.788 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.049 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:02.049 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:02.049 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.049 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.049 07:19:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:02.049 07:19:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:02.050 
************************************ 00:05:02.050 START TEST nvmf_abort 00:05:02.050 ************************************ 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:02.050 * Looking for test storage... 00:05:02.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:02.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.050 --rc genhtml_branch_coverage=1 00:05:02.050 --rc genhtml_function_coverage=1 00:05:02.050 --rc genhtml_legend=1 00:05:02.050 --rc geninfo_all_blocks=1 00:05:02.050 --rc geninfo_unexecuted_blocks=1 00:05:02.050 00:05:02.050 ' 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:02.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.050 --rc genhtml_branch_coverage=1 00:05:02.050 --rc genhtml_function_coverage=1 00:05:02.050 --rc genhtml_legend=1 00:05:02.050 --rc geninfo_all_blocks=1 00:05:02.050 --rc geninfo_unexecuted_blocks=1 00:05:02.050 00:05:02.050 ' 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:02.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.050 --rc genhtml_branch_coverage=1 00:05:02.050 --rc genhtml_function_coverage=1 00:05:02.050 --rc genhtml_legend=1 00:05:02.050 --rc geninfo_all_blocks=1 00:05:02.050 --rc geninfo_unexecuted_blocks=1 00:05:02.050 00:05:02.050 ' 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:02.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.050 --rc genhtml_branch_coverage=1 00:05:02.050 --rc genhtml_function_coverage=1 00:05:02.050 --rc genhtml_legend=1 00:05:02.050 --rc geninfo_all_blocks=1 00:05:02.050 --rc geninfo_unexecuted_blocks=1 00:05:02.050 00:05:02.050 ' 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:02.050 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.311 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
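The "[: : integer expression expected" message traced here (and once above, for nvmf_target_core) comes from nvmf/common.sh line 33 running '[' '' -eq 1 ']': -eq demands integer operands and the left-hand flag expands to an empty string, so the test errors out, returns non-zero, and the script simply falls through. A minimal reproduction plus a defensive variant; the flag name is illustrative, assuming the blank operand is an unset SPDK_TEST_*-style variable:

flag=""                                          # stands in for the flag that expands to ''
[ "$flag" -eq 1 ] && echo "interrupt mode"       # errors: "[: : integer expression expected"
[ "${flag:-0}" -eq 1 ] && echo "interrupt mode"  # defaulting blank to 0 keeps the test well-formed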
00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:02.312 07:19:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:10.455 07:19:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:10.455 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:10.455 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:10.455 07:19:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:10.455 Found net devices under 0000:31:00.0: cvl_0_0 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:10.455 Found net devices under 0000:31:00.1: cvl_0_1 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:10.455 07:19:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:10.455 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:10.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:10.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:05:10.456 00:05:10.456 --- 10.0.0.2 ping statistics --- 00:05:10.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:10.456 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:10.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:10.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:05:10.456 00:05:10.456 --- 10.0.0.1 ping statistics --- 00:05:10.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:10.456 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3177519 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3177519 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3177519 ']' 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:10.456 07:19:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.456 [2024-11-20 07:19:27.965059] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
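Condensed from the nvmf_tcp_init trace above: the target-side port (cvl_0_0) is moved into a private network namespace while the initiator port (cvl_0_1) stays in the default one, so a single host can exercise real NVMe/TCP across the physical link; the target app is then launched inside that namespace. Equivalent commands, roughly (paths shortened):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The two pings (10.0.0.2 from the default namespace, 10.0.0.1 from inside the target namespace) verify the link end-to-end before any NVMe traffic is attempted.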
00:05:10.456 [2024-11-20 07:19:27.965122] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:10.456 [2024-11-20 07:19:28.064455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:10.456 [2024-11-20 07:19:28.119108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:10.456 [2024-11-20 07:19:28.119158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:10.456 [2024-11-20 07:19:28.119167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:10.456 [2024-11-20 07:19:28.119174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:10.456 [2024-11-20 07:19:28.119180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:10.456 [2024-11-20 07:19:28.121304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.456 [2024-11-20 07:19:28.121460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.456 [2024-11-20 07:19:28.121461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.717 [2024-11-20 07:19:28.845035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.717 Malloc0 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.717 Delay0 
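With the target app up, the test provisions it over JSON-RPC; rpc_cmd in the trace is the suite's thin wrapper around scripts/rpc.py. The three calls so far, as a sketch (flag readings hedged: -u should be the I/O unit size, -a the admin-queue depth that bounds how many aborts can be outstanding, and the delay arguments are latencies in microseconds, i.e. about 1 s each):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0    # 64 MiB RAM-backed bdev, 4 KiB blocks
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

The delay bdev is the point of the setup: with roughly a second added to every I/O, the abort example that follows always finds commands still in flight to cancel.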
00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.717 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.978 [2024-11-20 07:19:28.929008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:10.978 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.978 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:10.978 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.978 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.978 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.978 07:19:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:10.978 [2024-11-20 07:19:29.077432] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:13.521 Initializing NVMe Controllers 00:05:13.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:13.521 controller IO queue size 128 less than required 00:05:13.521 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:13.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:13.521 Initialization complete. Launching workers. 
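Delay0 is then exported: one subsystem with any host allowed (-a) and serial SPDK0, the namespace attached, and data plus discovery listeners on 10.0.0.2:4420, after which build/examples/abort drives it from one core at queue depth 128 for one second. The wiring, sketched with the same rpc.py assumption as above:

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Reading the tallies below: submitted aborts split into success plus unsuccessful (28273 + 61 = 28334), a further 62 aborts could not be submitted at all, and "failed 0" means every abort that did reach the controller completed.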
00:05:13.521 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28269 00:05:13.521 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28334, failed to submit 62 00:05:13.521 success 28273, unsuccessful 61, failed 0 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:13.521 rmmod nvme_tcp 00:05:13.521 rmmod nvme_fabrics 00:05:13.521 rmmod nvme_keyring 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3177519 ']' 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3177519 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3177519 ']' 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3177519 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3177519 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3177519' 00:05:13.521 killing process with pid 3177519 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3177519 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3177519 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:13.521 07:19:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:13.521 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:15.434 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:15.434 00:05:15.434 real 0m13.544s 00:05:15.434 user 0m14.290s 00:05:15.434 sys 0m6.674s 00:05:15.434 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.434 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:15.434 ************************************ 00:05:15.434 END TEST nvmf_abort 00:05:15.434 ************************************ 00:05:15.434 07:19:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:15.434 07:19:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:15.695 ************************************ 00:05:15.695 START TEST nvmf_ns_hotplug_stress 00:05:15.695 ************************************ 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:15.695 * Looking for test storage... 
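The lcov version gate that opened each suite above (and repeats next for ns_hotplug_stress) is an element-wise compare of dot/dash/colon-separated version fields; condensed from the xtrace rather than quoted verbatim from scripts/common.sh, the logic is roughly:

lt() { cmp_versions "$1" "<" "$2"; }
cmp_versions() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local v max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
    for ((v = 0; v < max; v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1    # a greater field settles it: not "<"
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0    # a smaller field settles it: "<" holds
    done
    return 1    # all fields equal, so "<" is false
}
lt "$(lcov --version | awk '{print $NF}')" 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

Here lcov reports 1.15, the first field decides (1 < 2), lt returns 0, and the old-style --rc lcov_* option names are selected, exactly as the repeated LCOV_OPTS exports in the trace show.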
00:05:15.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.695 --rc genhtml_branch_coverage=1 00:05:15.695 --rc genhtml_function_coverage=1 00:05:15.695 --rc genhtml_legend=1 00:05:15.695 --rc geninfo_all_blocks=1 00:05:15.695 --rc geninfo_unexecuted_blocks=1 00:05:15.695 00:05:15.695 ' 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.695 --rc genhtml_branch_coverage=1 00:05:15.695 --rc genhtml_function_coverage=1 00:05:15.695 --rc genhtml_legend=1 00:05:15.695 --rc geninfo_all_blocks=1 00:05:15.695 --rc geninfo_unexecuted_blocks=1 00:05:15.695 00:05:15.695 ' 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.695 --rc genhtml_branch_coverage=1 00:05:15.695 --rc genhtml_function_coverage=1 00:05:15.695 --rc genhtml_legend=1 00:05:15.695 --rc geninfo_all_blocks=1 00:05:15.695 --rc geninfo_unexecuted_blocks=1 00:05:15.695 00:05:15.695 ' 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.695 --rc genhtml_branch_coverage=1 00:05:15.695 --rc genhtml_function_coverage=1 00:05:15.695 --rc genhtml_legend=1 00:05:15.695 --rc geninfo_all_blocks=1 00:05:15.695 --rc geninfo_unexecuted_blocks=1 00:05:15.695 00:05:15.695 ' 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.695 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.696 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.696 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.696 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.696 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.696 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.696 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:15.696 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:15.696 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.696 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.696 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:15.696 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.696 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.696 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:15.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:15.956 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:24.096 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:24.097 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:24.097 
07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:24.097 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:24.097 Found net devices under 0000:31:00.0: cvl_0_0 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:24.097 Found net devices under 0000:31:00.1: cvl_0_1 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:24.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:24.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:05:24.097 00:05:24.097 --- 10.0.0.2 ping statistics --- 00:05:24.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.097 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:24.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:24.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:05:24.097 00:05:24.097 --- 10.0.0.1 ping statistics --- 00:05:24.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.097 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3182406 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3182406 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
3182406 ']' 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:24.097 07:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:24.097 [2024-11-20 07:19:41.596704] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:05:24.097 [2024-11-20 07:19:41.596779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:24.097 [2024-11-20 07:19:41.700174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.098 [2024-11-20 07:19:41.751967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:24.098 [2024-11-20 07:19:41.752019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:24.098 [2024-11-20 07:19:41.752028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:24.098 [2024-11-20 07:19:41.752035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:24.098 [2024-11-20 07:19:41.752042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
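Up to this point the trace is setup: nvmf/common.sh has discovered the two e810 ports (0000:31:00.0/cvl_0_0 and 0000:31:00.1/cvl_0_1), built the TCP test bed, and launched nvmf_tgt (pid 3182406) inside the target namespace with reactors on cores 1-3 (-m 0xE). The test bed is the usual two-interface split: cvl_0_0 is moved into a dedicated network namespace as the target side at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens listener port 4420, and connectivity is verified with ping in both directions. A minimal sketch of that topology, reconstructed from the nvmf/common.sh commands echoed in the records above (not a verbatim copy of the script):

    # Move the target port into its own namespace; the initiator port stays put.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address the initiator side (root ns) and the target side (test ns).
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring everything up, including loopback inside the namespace.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow the NVMe/TCP listener port through the firewall, then smoke-test.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1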
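From here on the records are the stress loop proper. ns_hotplug_stress.sh has created subsystem nqn.2016-06.io.spdk:cnode1 (listening on 10.0.0.2:4420) with a Delay0 namespace (a delay bdev layered on Malloc0) plus a NULL1 null bdev, and started spdk_nvme_perf (PERF_PID=3183056) running 30 seconds of queue-depth-128, 512-byte random reads against the target. While that perf process stays alive, each pass hot-removes namespace 1, re-adds Delay0, and resizes NULL1 one step larger (null_size 1001, 1002, ...), which is the namespace-hotplug-under-I/O pattern the trace that follows repeats dozens of times. A minimal sketch of the loop as it can be read off those records (the while/kill -0 framing matches the @44 checks in the trace; the real script's exact loop bounds and error handling are not shown here and are assumptions):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000

    while kill -0 "$PERF_PID" 2>/dev/null; do          # only while perf is still issuing I/O
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1    # hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0  # hot-add it back
        null_size=$((null_size + 1))                   # 1001, 1002, ... as logged below
        "$rpc_py" bdev_null_resize NULL1 "$null_size"  # resize the null bdev under load
    done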
00:05:24.098 [2024-11-20 07:19:41.753922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.098 [2024-11-20 07:19:41.754081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.098 [2024-11-20 07:19:41.754082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.359 07:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.359 07:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:24.359 07:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:24.359 07:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.359 07:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:24.359 07:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:24.359 07:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:24.359 07:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:24.620 [2024-11-20 07:19:42.617251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.620 07:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:24.881 07:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:24.881 [2024-11-20 07:19:42.996247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:24.881 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:25.142 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:25.402 Malloc0 00:05:25.403 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:25.403 Delay0 00:05:25.664 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.664 07:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:25.924 NULL1 00:05:25.924 07:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:26.184 07:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3183056 00:05:26.185 07:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:26.185 07:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:26.185 07:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.445 07:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.445 07:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:26.445 07:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:26.705 true 00:05:26.705 07:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:26.705 07:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.965 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.225 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:27.225 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:27.225 true 00:05:27.225 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:27.225 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.485 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.746 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:27.746 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:27.746 true 00:05:27.746 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:27.746 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.006 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.266 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:28.266 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:28.266 true 00:05:28.526 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:28.526 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.526 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.787 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:28.787 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:29.046 true 00:05:29.046 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:29.046 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.046 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.306 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:29.306 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:29.565 true 00:05:29.565 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:29.565 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.825 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.825 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:29.825 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:30.150 true 00:05:30.150 07:19:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:30.150 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.150 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.410 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:30.410 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:30.670 true 00:05:30.670 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:30.670 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.929 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.929 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:30.929 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:31.193 true 00:05:31.193 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:31.193 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.452 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.452 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:31.452 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:31.713 true 00:05:31.713 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:31.713 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.974 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.234 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:32.234 07:19:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:32.234 true 00:05:32.234 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:32.234 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.493 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.783 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:32.783 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:32.783 true 00:05:32.783 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:32.783 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.064 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.369 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:33.369 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:33.369 true 00:05:33.369 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:33.369 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.629 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.629 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:33.629 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:33.890 true 00:05:33.890 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:33.890 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.150 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.410 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:34.410 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:34.410 true 00:05:34.410 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:34.410 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.671 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.931 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:34.931 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:34.931 true 00:05:34.931 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:34.931 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.192 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.453 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:35.453 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:35.453 true 00:05:35.714 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:35.714 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.714 07:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.974 07:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:35.974 07:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:36.235 true 00:05:36.235 07:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:36.235 07:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.235 07:19:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.495 07:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:36.496 07:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:36.756 true 00:05:36.756 07:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:36.756 07:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.018 07:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.018 07:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:37.018 07:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:37.279 true 00:05:37.279 07:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:37.279 07:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.540 07:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.540 07:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:37.540 07:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:37.800 true 00:05:37.800 07:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:37.800 07:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.061 07:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.322 07:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:38.322 07:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:38.322 true 00:05:38.322 07:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:38.322 07:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.583 07:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.845 07:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:38.845 07:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:38.845 true 00:05:38.845 07:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:38.845 07:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.106 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.367 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:39.367 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:39.367 true 00:05:39.367 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:39.367 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.628 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.889 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:39.889 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:39.889 true 00:05:40.149 07:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:40.150 07:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.150 07:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.411 07:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:40.411 07:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:40.411 true 00:05:40.672 07:19:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:40.672 07:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.672 07:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.933 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:40.933 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:41.192 true 00:05:41.192 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:41.192 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.192 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.453 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:41.453 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:41.714 true 00:05:41.714 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:41.714 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.975 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.975 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:41.975 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:42.236 true 00:05:42.236 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:42.236 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.497 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.497 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:42.497 07:20:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:42.758 true 00:05:42.758 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:42.758 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.018 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.279 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:43.279 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:43.279 true 00:05:43.279 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:43.279 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.539 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.800 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:43.800 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:43.800 true 00:05:43.800 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:43.800 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.061 07:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.322 07:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:44.322 07:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:44.322 true 00:05:44.583 07:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:44.583 07:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.583 07:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.843 07:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:44.843 07:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:45.103 true 00:05:45.103 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:45.103 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.103 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.364 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:45.364 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:45.625 true 00:05:45.625 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:45.625 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.625 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.884 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:45.884 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:46.145 true 00:05:46.145 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:46.145 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.406 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.406 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:46.406 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:46.666 true 00:05:46.666 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:46.666 07:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.927 07:20:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.927 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:46.927 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:47.189 true 00:05:47.189 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:47.189 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.450 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.710 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:47.710 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:47.710 true 00:05:47.710 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:47.710 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.971 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.233 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:48.233 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:48.233 true 00:05:48.233 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:48.233 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.494 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.754 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:48.754 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:48.754 true 00:05:48.754 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:48.755 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.016 07:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.277 07:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:49.277 07:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:49.277 true 00:05:49.537 07:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:49.537 07:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.537 07:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.798 07:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:49.798 07:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:50.121 true 00:05:50.121 07:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:50.121 07:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.121 07:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.381 07:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:50.381 07:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:50.381 true 00:05:50.641 07:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:50.641 07:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.641 07:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.901 07:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:50.901 07:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:51.162 true 00:05:51.162 07:20:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:51.162 07:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.162 07:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.422 07:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:51.422 07:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:51.682 true 00:05:51.682 07:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:51.682 07:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.942 07:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.942 07:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:51.942 07:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:52.202 true 00:05:52.202 07:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:52.202 07:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.464 07:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.464 07:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:05:52.464 07:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:05:52.724 true 00:05:52.724 07:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:52.724 07:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.985 07:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.985 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:05:52.985 07:20:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:05:53.245 true 00:05:53.245 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:53.245 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.504 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.765 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:05:53.765 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:05:53.765 true 00:05:53.765 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:53.765 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.026 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.287 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:05:54.287 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:05:54.287 true 00:05:54.287 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:54.287 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.546 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.806 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:05:54.806 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:05:55.066 true 00:05:55.066 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:55.066 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.066 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.326 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:05:55.326 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:05:55.587 true 00:05:55.587 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:55.587 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.849 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.849 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:05:55.849 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:05:56.110 true 00:05:56.110 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056 00:05:56.110 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.371 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.371 Initializing NVMe Controllers 00:05:56.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:56.371 Controller IO queue size 128, less than required. 00:05:56.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:56.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:56.371 Initialization complete. Launching workers. 
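
The long run of repeated trace lines above is a single hotplug loop: while the background I/O generator (PID 3183056 in this run) is still alive, the script hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-adds it backed by the Delay0 bdev, and grows the NULL1 bdev by one step per pass (null_size 1023, 1024, ...; bdev_null_resize takes the new size in MB), all while I/O is in flight. A minimal sketch of what ns_hotplug_stress.sh lines 44-50 appear to be doing, reconstructed from the trace (rpc.py path shortened; the perf_pid variable name is an assumption, not the script's):

    # null_size is initialized earlier in the script
    while kill -0 "$perf_pid"; do                                        # line 44: signal 0 only tests liveness
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45: hot-remove NSID 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46: re-add it, backed by Delay0
        null_size=$((null_size + 1))                                     # line 49
        rpc.py bdev_null_resize NULL1 "$null_size"                       # line 50: resize under I/O; prints "true"
    done

The loop exits exactly as logged below: once the perf process finishes, kill -0 reports "No such process", and the script falls through to wait for it and remove both namespaces. The performance summary printed by the exiting workload follows.
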
00:05:56.371 ========================================================
00:05:56.371                                                            Latency(us)
00:05:56.371 Device Information                                       :       IOPS      MiB/s    Average        min        max
00:05:56.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   31326.07      15.30    4085.92    1663.78   44691.79
00:05:56.371 ========================================================
00:05:56.371 Total                                                    :   31326.07      15.30    4085.92    1663.78   44691.79
00:05:56.371
00:05:56.371 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:05:56.371 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:05:56.631 true
00:05:56.631 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3183056
00:05:56.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3183056) - No such process
00:05:56.631 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3183056
00:05:56.631 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:56.892 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:56.892 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:56.892 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:56.892 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:56.892 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:56.892 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:57.152 null0
00:05:57.152 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:57.152 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:57.152 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:57.413 null1
00:05:57.413 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:57.413 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:57.413 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:05:57.413 null2
00:05:57.674 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:57.674 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:57.674
07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:57.674 null3 00:05:57.674 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:57.674 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:57.674 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:57.935 null4 00:05:57.935 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:57.935 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:57.935 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:58.195 null5 00:05:58.195 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:58.195 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:58.195 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:58.195 null6 00:05:58.195 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:58.195 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:58.195 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:58.456 null7 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
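
A quick consistency check on the Latency(us) summary above: bandwidth, IOPS, and transfer size are tied together by MiB/s = IOPS x io_size / 2^20, so the implied I/O size can be recovered from the two reported columns. The 512-byte result below is an inference from the numbers, not something the log states directly:

    awk 'BEGIN { iops = 31326.07; mibs = 15.30
                 printf "%.0f bytes per IO\n", mibs * 1048576 / iops }'   # prints: 512 bytes per IO

The average latency of ~4.1 ms against a ~1.7 ms minimum is also consistent with the earlier warning about the IO queue size of 128: with the queue depth capped, requests back up in the host NVMe driver and the average inflates.
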
00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
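
For reference, the eight backing devices these workers attach and detach were created just above (trace lines 58-60): null bdevs named null0 through null7, each 100 MB with a 4096-byte block size. Reconstructed sketch of that setup (rpc.py path shortened; not the verbatim script):

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # bdev_null_create <name> <size in MB> <block size in bytes>;
        # on success the RPC prints the new bdev's name (null0 ... null7)
        rpc.py bdev_null_create "null$i" 100 4096
    done
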
00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
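
Interleaved with that setup, the script launches one add_remove worker per bdev in the background (trace lines 62-66) and records the PIDs for the wait seen below. Each worker owns a distinct namespace ID, so the eight loops never race on the same NSID. The worker body is visible at trace lines 14-18; a reconstructed sketch:

    add_remove() {
        local nsid=$1 bdev=$2                     # line 14, variable names as shown in the trace
        for ((i = 0; i < 10; i++)); do            # line 16: ten add/remove rounds per worker
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18
        done
    }

    for ((i = 0; i < nthreads; i++)); do          # lines 62-65
        add_remove $((i + 1)) "null$i" &          # NSIDs 1..8 paired with null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                             # line 66: the "wait 3189628 3189630 ..." seen below
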
00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3189628 3189630 3189631 3189633 3189635 3189637 3189638 3189640 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.456 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:58.718 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.718 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:58.718 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:58.718 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:58.718 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:58.718 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:58.718 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:58.718 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.718 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.718 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.718 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:58.978 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.978 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.978 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:58.978 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
4 nqn.2016-06.io.spdk:cnode1 null3 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:58.979 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:58.979 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.979 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:58.979 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.979 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:58.979 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:58.979 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
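
From here on, the output is eight set -x traces interleaving on one log, which is why add and remove lines for different NSIDs appear shuffled. Every worker is a separate rpc.py client process, and all of them talk to the same SPDK target over its JSON-RPC Unix socket (/var/tmp/spdk.sock by default); the target serializes the requests internally, and that concurrent add/remove pressure is exactly what this phase is probing. The calls above are equivalent to the explicit form:

    # -s selects the JSON-RPC socket; shown here with the default path
    rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
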
00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.239 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:05:59.499 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.499 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:59.499 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:59.499 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.499 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:59.499 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:59.499 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:59.499 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:59.499 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.499 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.499 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:59.499 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.499 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.500 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:59.500 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.500 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.500 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:59.500 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.500 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.500 
07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:59.500 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.500 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.500 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:59.500 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.500 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.500 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:59.760 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.760 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.760 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:59.760 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.760 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.760 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:59.760 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.760 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:59.760 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:59.760 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:59.760 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:59.761 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:59.761 07:20:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:00.021 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:00.021 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.021 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.021 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:00.021 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.021 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.021 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
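The xtrace above is lines 16-18 of target/ns_hotplug_stress.sh looping ten times: each round hot-adds namespaces 1-8 (nsid n backed by bdev null(n-1)) to nqn.2016-06.io.spdk:cnode1 and hot-removes them again while I/O is in flight. A minimal sketch of that loop, reconstructed from the trace alone; the shuffled ordering via shuf and the variable names are assumptions, not the script's verbatim source, and since later rounds interleave adds and removes the real script likely randomizes or backgrounds the RPCs rather than running them strictly in two passes:

    # Sketch reconstructed from the xtrace; not the verbatim ns_hotplug_stress.sh.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do                # @16: ten hotplug rounds
        for n in $(shuf -e {1..8}); do              # assumed: random order, as the varying trace order suggests
            $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"   # @17: nsid n -> bdev null(n-1)
        done
        for n in $(shuf -e {1..8}); do
            $rpc nvmf_subsystem_remove_ns "$nqn" "$n"                    # @18: hot-remove them again
        done
    done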
00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:00.021 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:00.280 07:20:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.280 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:00.543 07:20:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.543 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:00.803 07:20:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:00.803 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:00.803 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.064 07:20:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.064 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.324 07:20:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.324 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.324 07:20:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.584 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.584 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.584 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.584 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.584 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.584 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.584 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.584 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.584 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.584 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.584 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.584 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.585 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.585 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.585 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.585 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.585 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.585 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.843 07:20:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.843 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.843 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.843 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.843 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.843 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.843 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.843 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.844 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.844 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.844 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.844 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.844 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.844 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.844 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.844 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.844 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.844 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.844 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.844 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.844 07:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.844 07:20:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.844 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.844 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.844 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.844 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.103 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.103 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.104 07:20:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:02.104 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:02.364 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:02.364 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:02.364 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:02.364 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:02.364 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:02.364 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:02.364 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:02.364 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:02.364 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:02.364 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:06:02.364 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:02.364 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:02.365 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3182406 ']'
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3182406
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3182406 ']'
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3182406
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3182406
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3182406'
00:06:02.365 killing process with pid 3182406
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3182406
00:06:02.365 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3182406
00:06:02.625 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:02.625 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:02.625 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:02.625 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:06:02.625 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:06:02.625 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:02.625 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:06:02.625 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:02.625 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:02.625 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:02.625 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:02.625 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:04.537 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:04.537
00:06:04.537 real 0m49.051s
00:06:04.537 user 3m18.976s
00:06:04.537 sys 0m17.406s
00:06:04.537 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:04.537 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:04.537 ************************************
00:06:04.537 END TEST nvmf_ns_hotplug_stress
00:06:04.537 ************************************
00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:04.799 ************************************
00:06:04.799 START TEST nvmf_delete_subsystem
00:06:04.799 ************************************
00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:04.799 * Looking for test storage...
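With the ten rounds done, the trap is cleared and nvmftestfini tears the target down: nvmfcleanup syncs and unloads nvme-tcp and nvme-fabrics (the rmmod lines above, which also drag out nvme_keyring), killprocess stops pid 3182406 (the reactor_1 target process), and nvmf_tcp_fini restores iptables and flushes the cvl_0_1 address. As a hedged sketch assembled only from the traced commands (the control flow between them is inferred, and TEST_TRANSPORT is an assumed variable name):

    # Hedged sketch of the traced teardown helpers; the commands come from the
    # xtrace above, the surrounding control flow is inferred, not verbatim.
    nvmfcleanup() {
        sync
        if [[ "$TEST_TRANSPORT" == tcp ]]; then     # trace: '[' tcp == tcp ']'
            set +e
            for i in {1..20}; do
                modprobe -v -r nvme-tcp && break    # emits the rmmod nvme_tcp/... lines above
            done
            modprobe -v -r nvme-fabrics
            set -e
        fi
    }

    killprocess() {                                 # common/autotest_common.sh@952-976, sketched
        local pid=$1
        [ -n "$pid" ] || return 1                   # trace: '[' -z 3182406 ']'
        kill -0 "$pid"                              # confirm the target is still alive
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap the SPDK target (reactor_1 here)
    }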
00:06:04.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.799 07:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.799 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:05.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.060 --rc genhtml_branch_coverage=1 00:06:05.060 --rc genhtml_function_coverage=1 00:06:05.060 --rc genhtml_legend=1 00:06:05.060 --rc geninfo_all_blocks=1 00:06:05.060 --rc geninfo_unexecuted_blocks=1 00:06:05.060 00:06:05.060 ' 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:05.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.060 --rc genhtml_branch_coverage=1 00:06:05.060 --rc genhtml_function_coverage=1 00:06:05.060 --rc genhtml_legend=1 00:06:05.060 --rc geninfo_all_blocks=1 00:06:05.060 --rc geninfo_unexecuted_blocks=1 00:06:05.060 00:06:05.060 ' 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:05.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.060 --rc genhtml_branch_coverage=1 00:06:05.060 --rc genhtml_function_coverage=1 00:06:05.060 --rc genhtml_legend=1 00:06:05.060 --rc geninfo_all_blocks=1 00:06:05.060 --rc geninfo_unexecuted_blocks=1 00:06:05.060 00:06:05.060 ' 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:05.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.060 --rc genhtml_branch_coverage=1 00:06:05.060 --rc genhtml_function_coverage=1 00:06:05.060 --rc genhtml_legend=1 00:06:05.060 --rc geninfo_all_blocks=1 00:06:05.060 --rc geninfo_unexecuted_blocks=1 00:06:05.060 00:06:05.060 ' 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.060 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:05.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:05.061 07:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:13.197 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.197 
07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:13.197 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:13.197 Found net devices under 0000:31:00.0: cvl_0_0 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:13.197 Found net devices under 0000:31:00.1: cvl_0_1 
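The two "Found ... (0x8086 - 0x159b)" echoes and the cvl_0_0/cvl_0_1 lines above are nvmftestinit's NIC discovery: gather_supported_nvmf_pci_devs matches both ports of an Intel E810 adapter against its PCI ID table, then resolves each PCI address to its kernel net device name through sysfs. A minimal sketch of that resolution step, using only the addresses and names visible in this run (the pci_bus_cache lookup and the driver checks are omitted):

    # Sketch only: map PCI NIC addresses to net device names via sysfs,
    # as the nvmf/common.sh trace above does. Addresses copied from this log.
    pci_devs=("0000:31:00.0" "0000:31:00.1")
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # the kernel publishes each port's netdev name under the PCI device node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
    done
    echo "Found net devices: ${net_devs[*]}"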
00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:13.197 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:13.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:13.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:06:13.198 00:06:13.198 --- 10.0.0.2 ping statistics --- 00:06:13.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.198 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:13.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:13.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:06:13.198 00:06:13.198 --- 10.0.0.1 ping statistics --- 00:06:13.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.198 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3194843 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3194843 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3194843 ']' 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:13.198 07:20:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:13.198 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.198 [2024-11-20 07:20:30.645455] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:06:13.198 [2024-11-20 07:20:30.645522] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.198 [2024-11-20 07:20:30.743955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.198 [2024-11-20 07:20:30.794915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:13.198 [2024-11-20 07:20:30.794966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:13.198 [2024-11-20 07:20:30.794976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:13.198 [2024-11-20 07:20:30.794984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:13.198 [2024-11-20 07:20:30.794990] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:13.198 [2024-11-20 07:20:30.796801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.198 [2024-11-20 07:20:30.796851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.459 [2024-11-20 07:20:31.508981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:13.459 07:20:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.459 [2024-11-20 07:20:31.533310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.459 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.459 NULL1 00:06:13.460 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.460 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:13.460 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.460 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.460 Delay0 00:06:13.460 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.460 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.460 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.460 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.460 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.460 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3195113 00:06:13.460 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:13.460 07:20:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:13.460 [2024-11-20 07:20:31.660339] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
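At this point the fixture is fully assembled: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace with cvl_0_0 at 10.0.0.2, cvl_0_1 at 10.0.0.1 stays in the root namespace as the initiator side, and the RPC calls traced above created the TCP transport, subsystem cnode1, its listener, and a Delay0 namespace: a null bdev wrapped in a delay bdev that injects 1,000,000 us (1 s) of latency into reads and writes. That injected second of latency is the point of the test; it guarantees that when spdk_nvme_perf (pid 3195113) has run for 2 s and the script deletes the subsystem, plenty of I/O is still in flight, so the expected outcome is the wall of "completed with error (sct=0, sc=8)" completions below rather than a hang. A condensed sketch of the sequence, with SPDK's stock scripts/rpc.py standing in for the rpc_cmd wrapper (all arguments are copied from this trace):

    # Condensed from the rpc_cmd calls traced above (rpc.py = scripts/rpc.py).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # drive I/O at the target, then tear the subsystem down underneath it
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1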
00:06:16.041 07:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:16.041 07:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.041 07:20:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 [2024-11-20 07:20:33.787997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ee2c0 is same with the state(6) to be set 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 
00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 Read completed with error (sct=0, sc=8) 00:06:16.041 starting I/O failed: -6 00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.041 Write completed with error (sct=0, sc=8) 
00:06:16.041 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Write completed with 
error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Write completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 Read completed with error (sct=0, sc=8) 00:06:16.042 starting I/O failed: -6 00:06:16.042 starting I/O failed: -6 00:06:16.042 starting I/O failed: -6 00:06:16.042 starting I/O failed: -6 00:06:16.042 starting I/O failed: -6 00:06:16.042 starting I/O failed: -6 00:06:16.042 starting I/O failed: -6 00:06:16.042 starting I/O failed: -6 00:06:16.042 starting I/O failed: -6 00:06:16.042 starting I/O failed: -6 00:06:16.042 starting I/O failed: -6 00:06:16.042 starting I/O failed: -6 00:06:16.695 [2024-11-20 07:20:34.760189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ef5e0 is same with the state(6) to be set 00:06:16.695 Read completed with error (sct=0, sc=8) 00:06:16.695 Read completed with error (sct=0, sc=8) 00:06:16.695 Read completed with error (sct=0, sc=8) 00:06:16.695 Read completed with error (sct=0, sc=8) 00:06:16.695 Read completed with error (sct=0, sc=8) 00:06:16.695 Write completed with error (sct=0, sc=8) 00:06:16.695 Read completed with error (sct=0, sc=8) 00:06:16.695 Read completed with error (sct=0, sc=8) 00:06:16.695 Read completed with error (sct=0, sc=8) 00:06:16.695 Read completed with error (sct=0, sc=8) 00:06:16.695 Write completed with error (sct=0, sc=8) 00:06:16.695 Read completed with error (sct=0, sc=8) 00:06:16.695 Read completed with error (sct=0, sc=8) 00:06:16.695 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 [2024-11-20 07:20:34.791410] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ee4a0 is same with the state(6) to be set 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 [2024-11-20 07:20:34.791809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ee0e0 is same with the state(6) to be set 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 
[2024-11-20 07:20:34.794143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4ad400d7e0 is same with the state(6) to be set 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Write completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 Read completed with error (sct=0, sc=8) 00:06:16.696 [2024-11-20 07:20:34.794278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4ad400d020 is same with the state(6) to be set 00:06:16.696 Initializing NVMe Controllers 00:06:16.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:16.696 Controller IO queue size 128, less than required. 00:06:16.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:16.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:16.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:16.696 Initialization complete. Launching workers. 
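A gloss on the perf summary below: by Little's law (concurrency = throughput x latency), a queue-depth-128 workload against a bdev that adds 1,000,000 us to every I/O converges to 128 / 1 s = 128 IOPS per core, which is exactly what the clean 3 s run later in this log reports (128.00 IOPS per core at roughly 1,002,000 us average). Here the run was cut short by the subsystem deletion, so the tail of outstanding commands failed fast: that drags the averages below one second (909,832.71 and 917,838.07 us) and lifts IOPS above the steady-state figure (163.11 and 183.00).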
00:06:16.696 ======================================================== 00:06:16.696 Latency(us) 00:06:16.696 Device Information : IOPS MiB/s Average min max 00:06:16.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.11 0.08 909832.71 305.36 1008828.01 00:06:16.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 183.00 0.09 917838.07 383.67 1013076.13 00:06:16.696 ======================================================== 00:06:16.696 Total : 346.11 0.17 914065.42 305.36 1013076.13 00:06:16.696 00:06:16.696 [2024-11-20 07:20:34.794963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ef5e0 (9): Bad file descriptor 00:06:16.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:16.696 07:20:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.696 07:20:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:16.696 07:20:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3195113 00:06:16.696 07:20:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3195113 00:06:17.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3195113) - No such process 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3195113 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3195113 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3195113 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.293 07:20:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.293 [2024-11-20 07:20:35.323761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3195882 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3195882 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:17.293 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:17.293 [2024-11-20 07:20:35.423159] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
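Round two rebuilds the subsystem, listener and Delay0 namespace, then launches a shorter 3 s perf run (pid 3195882) and waits for it to exit with a bounded poll; this time the run completes undisturbed, which is why the second summary further below reports no I/O errors and averages of almost exactly the injected 1 s. The "(( delay++ > 20 ))" / "kill -0 3195882" / "sleep 0.5" triples repeating below are that poll: kill -0 sends no signal, it only tests whether the pid still exists, and the counter caps the wait at roughly 20 x 0.5 s = 10 s. A reconstruction from the delete_subsystem.sh xtrace above (the give-up branch is paraphrased):

    # Bounded wait for perf to exit, as traced at delete_subsystem.sh lines 56-60.
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # kill -0: existence probe, no signal sent
        sleep 0.5
        (( delay++ > 20 )) && exit 1             # give up after ~10 s of polling
    done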
00:06:17.864 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:17.864 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3195882 00:06:17.864 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:18.434 07:20:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:18.434 07:20:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3195882 00:06:18.434 07:20:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:18.695 07:20:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:18.695 07:20:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3195882 00:06:18.695 07:20:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:19.266 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:19.266 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3195882 00:06:19.266 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:19.836 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:19.836 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3195882 00:06:19.836 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:20.407 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.407 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3195882 00:06:20.407 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:20.407 Initializing NVMe Controllers 00:06:20.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:20.407 Controller IO queue size 128, less than required. 00:06:20.407 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:20.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:20.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:20.407 Initialization complete. Launching workers. 
00:06:20.407 ======================================================== 00:06:20.407 Latency(us) 00:06:20.407 Device Information : IOPS MiB/s Average min max 00:06:20.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001870.35 1000140.07 1004954.32 00:06:20.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002684.56 1000281.99 1007707.37 00:06:20.407 ======================================================== 00:06:20.407 Total : 256.00 0.12 1002277.46 1000140.07 1007707.37 00:06:20.407 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3195882 00:06:20.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3195882) - No such process 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3195882 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:20.978 rmmod nvme_tcp 00:06:20.978 rmmod nvme_fabrics 00:06:20.978 rmmod nvme_keyring 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:20.978 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3194843 ']' 00:06:20.979 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3194843 00:06:20.979 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3194843 ']' 00:06:20.979 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3194843 00:06:20.979 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:20.979 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:20.979 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3194843 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3194843' 00:06:20.979 killing process with pid 3194843 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3194843 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3194843 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.979 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:23.523 00:06:23.523 real 0m18.380s 00:06:23.523 user 0m30.837s 00:06:23.523 sys 0m6.746s 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.523 ************************************ 00:06:23.523 END TEST nvmf_delete_subsystem 00:06:23.523 ************************************ 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:23.523 ************************************ 00:06:23.523 START TEST nvmf_host_management 00:06:23.523 ************************************ 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:23.523 * Looking for test storage... 
00:06:23.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.523 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:23.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.524 --rc genhtml_branch_coverage=1 00:06:23.524 --rc genhtml_function_coverage=1 00:06:23.524 --rc genhtml_legend=1 00:06:23.524 --rc geninfo_all_blocks=1 00:06:23.524 --rc geninfo_unexecuted_blocks=1 00:06:23.524 00:06:23.524 ' 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:23.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.524 --rc genhtml_branch_coverage=1 00:06:23.524 --rc genhtml_function_coverage=1 00:06:23.524 --rc genhtml_legend=1 00:06:23.524 --rc geninfo_all_blocks=1 00:06:23.524 --rc geninfo_unexecuted_blocks=1 00:06:23.524 00:06:23.524 ' 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:23.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.524 --rc genhtml_branch_coverage=1 00:06:23.524 --rc genhtml_function_coverage=1 00:06:23.524 --rc genhtml_legend=1 00:06:23.524 --rc geninfo_all_blocks=1 00:06:23.524 --rc geninfo_unexecuted_blocks=1 00:06:23.524 00:06:23.524 ' 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:23.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.524 --rc genhtml_branch_coverage=1 00:06:23.524 --rc genhtml_function_coverage=1 00:06:23.524 --rc genhtml_legend=1 00:06:23.524 --rc geninfo_all_blocks=1 00:06:23.524 --rc geninfo_unexecuted_blocks=1 00:06:23.524 00:06:23.524 ' 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:23.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:23.524 07:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.669 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:31.670 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:31.670 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:31.670 Found net devices under 0000:31:00.0: cvl_0_0 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.670 07:20:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:31.670 Found net devices under 0000:31:00.1: cvl_0_1 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:31.670 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:31.671 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:31.671 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:31.671 07:20:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:31.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:31.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms
00:06:31.671
00:06:31.671 --- 10.0.0.2 ping statistics ---
00:06:31.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:31.671 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:31.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:31.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms
00:06:31.671
00:06:31.671 --- 10.0.0.1 ping statistics ---
00:06:31.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:31.671 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3200935
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3200935
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3200935 ']'
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:31.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:31.671 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:31.671 [2024-11-20 07:20:49.147325] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
00:06:31.671 [2024-11-20 07:20:49.147391] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:31.671 [2024-11-20 07:20:49.245550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:31.671 [2024-11-20 07:20:49.298877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:31.671 [2024-11-20 07:20:49.298927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:31.671 [2024-11-20 07:20:49.298935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:31.671 [2024-11-20 07:20:49.298942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:31.671 [2024-11-20 07:20:49.298948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
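The ipts/ping entries above finish the physical-NIC TCP topology this run uses: one e810 port (cvl_0_0) has been moved into the cvl_0_0_ns_spdk network namespace as the target side, its peer port (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule tagged SPDK_NVMF opens port 4420 so the iptr cleanup seen earlier (iptables-save | grep -v SPDK_NVMF | iptables-restore) can strip it again. Condensed into a standalone sketch, with interface names, addresses, and the comment tag exactly as traced and all error handling omitted:

# Point-to-point NVMe/TCP test topology between two ports of the same NIC.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the listener port; the comment tag lets cleanup remove only SPDK rules.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                              # sanity-check both directions
ip netns exec "$NS" ping -c 1 10.0.0.1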
00:06:31.671 [2024-11-20 07:20:49.301024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:31.671 [2024-11-20 07:20:49.301188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:31.671 [2024-11-20 07:20:49.301351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:06:31.671 [2024-11-20 07:20:49.301351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:31.933 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:31.933 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0
00:06:31.933 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:31.933 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:31.933 07:20:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:31.933 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:31.933 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:06:31.933 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:31.933 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:31.933 [2024-11-20 07:20:50.024538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:31.933 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:31.933 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:06:31.933 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:31.933 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:31.933 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:31.933 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:06:31.933 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:06:31.933 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:31.933 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:31.933 Malloc0
00:06:32.195 [2024-11-20 07:20:50.109218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3201019
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3201019 /var/tmp/bdevperf.sock
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3201019 ']'
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:06:32.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:32.195 {
00:06:32.195   "params": {
00:06:32.195     "name": "Nvme$subsystem",
00:06:32.195     "trtype": "$TEST_TRANSPORT",
00:06:32.195     "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:32.195     "adrfam": "ipv4",
00:06:32.195     "trsvcid": "$NVMF_PORT",
00:06:32.195     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:32.195     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:32.195     "hdgst": ${hdgst:-false},
00:06:32.195     "ddgst": ${ddgst:-false}
00:06:32.195   },
00:06:32.195   "method": "bdev_nvme_attach_controller"
00:06:32.195 }
00:06:32.195 EOF
00:06:32.195 )")
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:06:32.195 07:20:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:32.196   "params": {
00:06:32.196     "name": "Nvme0",
00:06:32.196     "trtype": "tcp",
00:06:32.196     "traddr": "10.0.0.2",
00:06:32.196     "adrfam": "ipv4",
00:06:32.196     "trsvcid": "4420",
00:06:32.196     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:32.196     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:32.196     "hdgst": false,
00:06:32.196     "ddgst": false
00:06:32.196   },
00:06:32.196   "method": "bdev_nvme_attach_controller"
00:06:32.196 }'
00:06:32.196 [2024-11-20 07:20:50.220430] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
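The interleaved @72/@73 entries above are a single command: bdevperf is started with --json /dev/fd/63, a process substitution fed by gen_nvmf_target_json 0, whose expansion is exactly the printf'd bdev_nvme_attach_controller config in the trace. A condensed sketch of that launch pattern (path shortened; waitforlisten is the SPDK helper the trace names, and the backgrounding details are assumptions):

# Launch bdevperf against the target just created: 64-deep queue, 64 KiB I/O,
# a 10-second verify workload, config delivered via process substitution
# (which is what shows up as /dev/fd/63 in the trace).
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock   # block until the RPC socket is up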
00:06:32.196 [2024-11-20 07:20:50.220499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201019 ] 00:06:32.196 [2024-11-20 07:20:50.314338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.196 [2024-11-20 07:20:50.368670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.457 Running I/O for 10 seconds... 00:06:33.031 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.031 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:33.031 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:33.031 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.031 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.031 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.031 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:33.031 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:33.031 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:33.031 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:33.032 07:20:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.032 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.032 [2024-11-20 07:20:51.117029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117235] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the 
state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a910 is same with the state(6) to be set 00:06:33.032 [2024-11-20 07:20:51.117821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.032 [2024-11-20 07:20:51.117880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.032 [2024-11-20 07:20:51.117904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.032 [2024-11-20 07:20:51.117913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.032 [2024-11-20 07:20:51.117924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.032 [2024-11-20 07:20:51.117933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.117944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.117953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.117963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.117972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.117983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.117991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:33.033 [2024-11-20 07:20:51.118417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 
07:20:51.118592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.033 [2024-11-20 07:20:51.118619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.033 [2024-11-20 07:20:51.118626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118775] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.118984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.118994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.119001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.119010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.034 [2024-11-20 07:20:51.119017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.034 [2024-11-20 07:20:51.119027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c5c60 is same with the state(6) to be set 00:06:33.034 [2024-11-20 07:20:51.120361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:33.034 task offset: 90112 on job bdev=Nvme0n1 fails 00:06:33.034 00:06:33.034 Latency(us) 00:06:33.034 [2024-11-20T06:20:51.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.034 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:33.034 Job: Nvme0n1 ended in about 0.53 seconds with error 00:06:33.034 Verification LBA range: start 0x0 length 0x400 00:06:33.034 Nvme0n1 : 0.53 1324.50 82.78 120.41 0.00 43189.72 8465.07 35826.35 00:06:33.034 [2024-11-20T06:20:51.244Z] =================================================================================================================== 00:06:33.034 [2024-11-20T06:20:51.244Z] Total : 1324.50 82.78 120.41 0.00 43189.72 8465.07 35826.35 00:06:33.034 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.034 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:33.034 [2024-11-20 07:20:51.122611] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.034 [2024-11-20 07:20:51.122652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b5280 (9): Bad file descriptor 00:06:33.034 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.034 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.034 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.034 07:20:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:33.034 [2024-11-20 07:20:51.185488] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:33.976 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3201019 00:06:33.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3201019) - No such process 00:06:33.976 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:33.976 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:33.976 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:33.976 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:33.976 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:33.976 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:33.976 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:33.976 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:33.976 { 00:06:33.976 "params": { 00:06:33.976 "name": "Nvme$subsystem", 00:06:33.976 "trtype": "$TEST_TRANSPORT", 00:06:33.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:33.976 "adrfam": "ipv4", 00:06:33.976 "trsvcid": "$NVMF_PORT", 00:06:33.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:33.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:33.976 "hdgst": ${hdgst:-false}, 00:06:33.976 "ddgst": ${ddgst:-false} 00:06:33.976 }, 00:06:33.976 "method": "bdev_nvme_attach_controller" 00:06:33.976 } 00:06:33.976 EOF 00:06:33.976 )") 00:06:33.976 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:33.976 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:33.976 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:33.976 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:33.976 "params": { 00:06:33.976 "name": "Nvme0", 00:06:33.976 "trtype": "tcp", 00:06:33.976 "traddr": "10.0.0.2", 00:06:33.976 "adrfam": "ipv4", 00:06:33.976 "trsvcid": "4420", 00:06:33.976 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:33.976 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:33.976 "hdgst": false, 00:06:33.976 "ddgst": false 00:06:33.976 }, 00:06:33.976 "method": "bdev_nvme_attach_controller" 00:06:33.976 }' 00:06:34.237 [2024-11-20 07:20:52.193347] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
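For anyone replaying this step by hand: the trace above pipes a generated one-controller JSON config into bdevperf over /dev/fd/62. A minimal standalone sketch follows; the outer "subsystems"/"bdev" wrapper is an assumption (gen_nvmf_target_json assembles it off-screen), while the attach-controller parameters and the bdevperf flags are copied verbatim from the trace.

# Sketch only: the wrapper layout is assumed; params and flags match the trace above.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1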
00:06:34.237 [2024-11-20 07:20:52.193401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201484 ] 00:06:34.237 [2024-11-20 07:20:52.281724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.237 [2024-11-20 07:20:52.316652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.498 Running I/O for 1 seconds... 00:06:35.440 1598.00 IOPS, 99.88 MiB/s 00:06:35.440 Latency(us) 00:06:35.440 [2024-11-20T06:20:53.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:35.440 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:35.440 Verification LBA range: start 0x0 length 0x400 00:06:35.440 Nvme0n1 : 1.04 1598.87 99.93 0.00 0.00 39339.23 5843.63 32768.00 00:06:35.440 [2024-11-20T06:20:53.650Z] =================================================================================================================== 00:06:35.440 [2024-11-20T06:20:53.650Z] Total : 1598.87 99.93 0.00 0.00 39339.23 5843.63 32768.00 00:06:35.440 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:35.440 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:35.440 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:35.440 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:35.440 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:35.440 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:35.440 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:35.440 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:35.440 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:35.440 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:35.440 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:35.440 rmmod nvme_tcp 00:06:35.704 rmmod nvme_fabrics 00:06:35.704 rmmod nvme_keyring 00:06:35.704 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:35.704 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:35.704 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:35.704 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3200935 ']' 00:06:35.704 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3200935 00:06:35.704 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3200935 ']' 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3200935 00:06:35.705 07:20:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3200935 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3200935' 00:06:35.705 killing process with pid 3200935 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3200935 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3200935 00:06:35.705 [2024-11-20 07:20:53.865323] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.705 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.252 07:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:38.252 07:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:38.252 00:06:38.252 real 0m14.700s 00:06:38.252 user 0m22.919s 00:06:38.252 sys 0m6.951s 00:06:38.252 07:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.252 07:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.252 ************************************ 00:06:38.252 END TEST nvmf_host_management 00:06:38.252 ************************************ 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
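The teardown traced above reduces to a handful of commands. A condensed sketch, assuming the same PID and interface names as this run (killprocess and iptr are harness helpers; the lines below are what their traces show them running):

# Condensed replay of the teardown; the PID and device names are specific to this run.
kill 3200935                                          # stop the nvmf target (killprocess)
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring     # unload host-side NVMe modules
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the test's ACCEPT rule (iptr)
ip -4 addr flush cvl_0_1                              # clear the initiator-side interface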
00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.252 ************************************ 00:06:38.252 START TEST nvmf_lvol 00:06:38.252 ************************************ 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:38.252 * Looking for test storage... 00:06:38.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.252 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:38.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.253 --rc genhtml_branch_coverage=1 00:06:38.253 --rc genhtml_function_coverage=1 00:06:38.253 --rc genhtml_legend=1 00:06:38.253 --rc geninfo_all_blocks=1 00:06:38.253 --rc geninfo_unexecuted_blocks=1 00:06:38.253 00:06:38.253 ' 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:38.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.253 --rc genhtml_branch_coverage=1 00:06:38.253 --rc genhtml_function_coverage=1 00:06:38.253 --rc genhtml_legend=1 00:06:38.253 --rc geninfo_all_blocks=1 00:06:38.253 --rc geninfo_unexecuted_blocks=1 00:06:38.253 00:06:38.253 ' 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:38.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.253 --rc genhtml_branch_coverage=1 00:06:38.253 --rc genhtml_function_coverage=1 00:06:38.253 --rc genhtml_legend=1 00:06:38.253 --rc geninfo_all_blocks=1 00:06:38.253 --rc geninfo_unexecuted_blocks=1 00:06:38.253 00:06:38.253 ' 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:38.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.253 --rc genhtml_branch_coverage=1 00:06:38.253 --rc genhtml_function_coverage=1 00:06:38.253 --rc genhtml_legend=1 00:06:38.253 --rc geninfo_all_blocks=1 00:06:38.253 --rc geninfo_unexecuted_blocks=1 00:06:38.253 00:06:38.253 ' 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.253 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:38.254 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:38.254 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:38.254 07:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:46.405 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:46.405 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.405 07:21:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:46.405 Found net devices under 0000:31:00.0: cvl_0_0 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:46.405 Found net devices under 0000:31:00.1: cvl_0_1 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:46.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:06:46.405 00:06:46.405 --- 10.0.0.2 ping statistics --- 00:06:46.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.405 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:46.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:06:46.405 00:06:46.405 --- 10.0.0.1 ping statistics --- 00:06:46.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.405 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.405 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3206171 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3206171 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3206171 ']' 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:46.406 07:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.406 [2024-11-20 07:21:04.009960] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
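nvmfappstart boils down to launching nvmf_tgt through the namespace prefix and then blocking until the RPC socket answers. A rough sketch of that wait, run from an SPDK checkout (the real waitforlisten helper adds retry limits and shm-id handling):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  pid=$!
  # Poll the UNIX-domain RPC socket; rpc_get_methods succeeds once the
  # app is listening, and kill -0 catches an early crash.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done

Here -m 0x7 pins three reactors (cores 0, 1 and 2), which is why three 'Reactor started' notices follow, and -e 0xFFFF enables every tracepoint group for the spdk_trace snapshot mentioned in the startup notices.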
00:06:46.406 [2024-11-20 07:21:04.010030] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.406 [2024-11-20 07:21:04.112221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.406 [2024-11-20 07:21:04.165140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.406 [2024-11-20 07:21:04.165195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.406 [2024-11-20 07:21:04.165204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.406 [2024-11-20 07:21:04.165211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.406 [2024-11-20 07:21:04.165217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:46.406 [2024-11-20 07:21:04.167188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.406 [2024-11-20 07:21:04.167349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.406 [2024-11-20 07:21:04.167350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.667 07:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:46.667 07:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:06:46.667 07:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:46.667 07:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:46.667 07:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.929 07:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.929 07:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:46.929 [2024-11-20 07:21:05.040855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.929 07:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:47.190 07:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:47.190 07:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:47.451 07:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:47.452 07:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:47.713 07:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:47.975 07:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bdd6af51-87a6-4828-b3fc-b590377c7792 00:06:47.975 07:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bdd6af51-87a6-4828-b3fc-b590377c7792 lvol 20 00:06:47.975 07:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=77c60f29-e605-4e11-8635-aa3f3680d97c 00:06:47.975 07:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:48.236 07:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 77c60f29-e605-4e11-8635-aa3f3680d97c 00:06:48.498 07:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:48.758 [2024-11-20 07:21:06.710129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.758 07:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:48.758 07:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3206873 00:06:48.758 07:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:48.758 07:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:50.141 07:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 77c60f29-e605-4e11-8635-aa3f3680d97c MY_SNAPSHOT 00:06:50.141 07:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8017dda2-0047-465b-809a-b23d34a76911 00:06:50.141 07:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 77c60f29-e605-4e11-8635-aa3f3680d97c 30 00:06:50.400 07:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8017dda2-0047-465b-809a-b23d34a76911 MY_CLONE 00:06:50.400 07:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=def054fe-addc-4d05-ae35-33ea5d9cd671 00:06:50.400 07:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate def054fe-addc-4d05-ae35-33ea5d9cd671 00:06:50.969 07:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3206873 00:06:59.108 Initializing NVMe Controllers 00:06:59.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:59.108 Controller IO queue size 128, less than required. 00:06:59.108 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
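The snapshot/resize/clone/inflate calls above land while spdk_nvme_perf is still driving random writes from the root namespace, which is the actual point of the test: every one of those lvol mutations must be safe under live NVMe-oF I/O. Condensed to just the RPC sequence, with sizes in MiB and the UUIDs captured into variables the way the trace does:

  rpc=./scripts/rpc.py
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # raid0 built from two malloc bdevs earlier
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # ... spdk_nvme_perf starts against 10.0.0.2:4420 here ...
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # live lvol becomes a clone of the snapshot
  $rpc bdev_lvol_resize "$lvol" 30                     # grow the volume under I/O
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)       # thin clone off the snapshot
  $rpc bdev_lvol_inflate "$clone"                      # allocate the clone fully, detaching it from its parent

The 'Controller IO queue size 128, less than required' lines are spdk_nvme_perf itself warning that at this queue depth some requests will queue inside the driver; for a functional pass that is expected noise, not a failure.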
00:06:59.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:59.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:59.108 Initialization complete. Launching workers. 00:06:59.108 ======================================================== 00:06:59.108 Latency(us) 00:06:59.108 Device Information : IOPS MiB/s Average min max 00:06:59.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16272.10 63.56 7867.45 1697.76 49365.10 00:06:59.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17433.10 68.10 7344.29 494.60 57063.24 00:06:59.108 ======================================================== 00:06:59.108 Total : 33705.20 131.66 7596.86 494.60 57063.24 00:06:59.108 00:06:59.108 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:59.369 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 77c60f29-e605-4e11-8635-aa3f3680d97c 00:06:59.629 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bdd6af51-87a6-4828-b3fc-b590377c7792 00:06:59.629 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:59.629 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:59.629 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:59.629 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:59.629 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:59.629 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:59.629 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:59.629 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:59.629 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:59.629 rmmod nvme_tcp 00:06:59.629 rmmod nvme_fabrics 00:06:59.891 rmmod nvme_keyring 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3206171 ']' 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3206171 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3206171 ']' 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3206171 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3206171 00:06:59.891 07:21:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3206171' 00:06:59.891 killing process with pid 3206171 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3206171 00:06:59.891 07:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3206171 00:06:59.891 07:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:59.891 07:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:59.891 07:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:59.891 07:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:59.891 07:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:59.891 07:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:59.891 07:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:59.891 07:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:59.891 07:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:59.891 07:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.891 07:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.891 07:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:02.438 00:07:02.438 real 0m24.100s 00:07:02.438 user 1m4.441s 00:07:02.438 sys 0m8.931s 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:02.438 ************************************ 00:07:02.438 END TEST nvmf_lvol 00:07:02.438 ************************************ 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:02.438 ************************************ 00:07:02.438 START TEST nvmf_lvs_grow 00:07:02.438 ************************************ 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:02.438 * Looking for test storage... 
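nvmftestfini walks the setup back in reverse, and the rule tagging done at setup time is what lets it remove only its own firewall entries. Roughly, assuming the helper names resolve to what this trace shows (_remove_spdk_ns is taken here to amount to deleting the namespace):

  modprobe -v -r nvme-tcp        # cascades to nvme_fabrics and nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics    # usually a no-op by then, kept for safety
  # iptr: reload the ruleset minus every rule carrying the SPDK_NVMF tag.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk   # physical port cvl_0_0 falls back to the root namespace
  ip -4 addr flush cvl_0_1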
00:07:02.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.438 --rc genhtml_branch_coverage=1 00:07:02.438 --rc genhtml_function_coverage=1 00:07:02.438 --rc genhtml_legend=1 00:07:02.438 --rc geninfo_all_blocks=1 00:07:02.438 --rc geninfo_unexecuted_blocks=1 00:07:02.438 00:07:02.438 ' 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.438 --rc genhtml_branch_coverage=1 00:07:02.438 --rc genhtml_function_coverage=1 00:07:02.438 --rc genhtml_legend=1 00:07:02.438 --rc geninfo_all_blocks=1 00:07:02.438 --rc geninfo_unexecuted_blocks=1 00:07:02.438 00:07:02.438 ' 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.438 --rc genhtml_branch_coverage=1 00:07:02.438 --rc genhtml_function_coverage=1 00:07:02.438 --rc genhtml_legend=1 00:07:02.438 --rc geninfo_all_blocks=1 00:07:02.438 --rc geninfo_unexecuted_blocks=1 00:07:02.438 00:07:02.438 ' 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.438 --rc genhtml_branch_coverage=1 00:07:02.438 --rc genhtml_function_coverage=1 00:07:02.438 --rc genhtml_legend=1 00:07:02.438 --rc geninfo_all_blocks=1 00:07:02.438 --rc geninfo_unexecuted_blocks=1 00:07:02.438 00:07:02.438 ' 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:02.438 07:21:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.438 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:02.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:02.439 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:10.586 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:10.586 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.586 07:21:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:10.586 Found net devices under 0000:31:00.0: cvl_0_0 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:10.586 Found net devices under 0000:31:00.1: cvl_0_1 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:10.586 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:10.587 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.587 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.587 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:10.587 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:10.587 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.587 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.587 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.587 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.587 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:10.587 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:10.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:07:10.587 00:07:10.587 --- 10.0.0.2 ping statistics --- 00:07:10.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.587 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
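The ipts call above is the other half of that cleanup contract: a thin wrapper that stamps every rule it installs with an SPDK_NVMF comment built from its own arguments, which is exactly what the grep -v SPDK_NVMF in the earlier teardown keys on. A sketch matching the expansion shown in this trace:

  ipts() {
      # Append an identifying comment so teardown can strip the rule later.
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT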
00:07:10.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:07:10.587 00:07:10.587 --- 10.0.0.1 ping statistics --- 00:07:10.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.587 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3213737 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3213737 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3213737 ']' 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.587 [2024-11-20 07:21:28.168851] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
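The NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") line just above is the trick that makes every later target invocation namespace-aware: the harness keeps commands as bash arrays and prepends the ip netns prefix once. The same pattern in isolation:

  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  "${NVMF_APP[@]}" -m 0x1 &
  # expands to: ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1

Arrays survive word splitting where a flat string would not, which matters once paths or arguments contain spaces. Note this run asks for -m 0x1, a single reactor, where the nvmf_lvol test above used -m 0x7.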
00:07:10.587 [2024-11-20 07:21:28.168917] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.587 [2024-11-20 07:21:28.244333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.587 [2024-11-20 07:21:28.290370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.587 [2024-11-20 07:21:28.290416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.587 [2024-11-20 07:21:28.290425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.587 [2024-11-20 07:21:28.290430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.587 [2024-11-20 07:21:28.290434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:10.587 [2024-11-20 07:21:28.291172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:10.587 [2024-11-20 07:21:28.614512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.587 ************************************ 00:07:10.587 START TEST lvs_grow_clean 00:07:10.587 ************************************ 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:10.587 07:21:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.587 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:10.848 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:10.848 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:11.108 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=af866df4-b7f1-46f5-9dc0-b1db0d91c35d 00:07:11.108 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af866df4-b7f1-46f5-9dc0-b1db0d91c35d 00:07:11.108 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:11.108 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:11.108 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:11.108 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u af866df4-b7f1-46f5-9dc0-b1db0d91c35d lvol 150 00:07:11.370 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6d429108-dcc9-41da-bdb8-44a343058b91 00:07:11.370 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.370 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:11.630 [2024-11-20 07:21:29.656318] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:11.630 [2024-11-20 07:21:29.656387] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:11.630 true 00:07:11.630 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
af866df4-b7f1-46f5-9dc0-b1db0d91c35d 00:07:11.630 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:11.891 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:11.891 07:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:11.891 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6d429108-dcc9-41da-bdb8-44a343058b91 00:07:12.153 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:12.415 [2024-11-20 07:21:30.382666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.415 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:12.415 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3214331 00:07:12.415 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:12.415 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:12.415 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3214331 /var/tmp/bdevperf.sock 00:07:12.415 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3214331 ']' 00:07:12.415 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:12.415 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:12.415 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:12.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:12.415 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:12.415 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:12.415 [2024-11-20 07:21:30.618187] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
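What the listener and bdevperf plumbing above is building toward: lvs_grow_clean puts an lvstore on a 200 MiB file-backed aio bdev with 4 MiB clusters, doubles the file and rescans it during setup, then (under bdevperf I/O, further down) grows the lvstore into the new space. A condensed sketch of that flow; the backing path here is arbitrary, the run above keeps it under test/nvmf/target/aio_bdev:

  rpc=./scripts/rpc.py
  f=/tmp/aio_bdev                 # any writable path
  truncate -s 200M "$f"
  $rpc bdev_aio_create "$f" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49: 200/4 minus ~1 cluster of metadata
  $rpc bdev_lvol_create -u "$lvs" lvol 150        # 150 MiB volume in a 196 MiB store
  truncate -s 400M "$f"                           # grow the file underneath the bdev
  $rpc bdev_aio_rescan aio_bdev                   # block count 51200 -> 102400, as logged above
  $rpc bdev_lvol_grow_lvstore -u "$lvs"           # lvstore claims the new clusters
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99

The 49 -> 99 jump in total_data_clusters is the test's pass condition, checked both before and after the grow in the trace that follows.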
00:07:12.415 [2024-11-20 07:21:30.618252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3214331 ] 00:07:12.743 [2024-11-20 07:21:30.710001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.743 [2024-11-20 07:21:30.763198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.316 07:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:13.316 07:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:13.316 07:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:13.577 Nvme0n1 00:07:13.577 07:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:13.838 [ 00:07:13.838 { 00:07:13.838 "name": "Nvme0n1", 00:07:13.838 "aliases": [ 00:07:13.838 "6d429108-dcc9-41da-bdb8-44a343058b91" 00:07:13.838 ], 00:07:13.838 "product_name": "NVMe disk", 00:07:13.838 "block_size": 4096, 00:07:13.838 "num_blocks": 38912, 00:07:13.838 "uuid": "6d429108-dcc9-41da-bdb8-44a343058b91", 00:07:13.838 "numa_id": 0, 00:07:13.838 "assigned_rate_limits": { 00:07:13.838 "rw_ios_per_sec": 0, 00:07:13.838 "rw_mbytes_per_sec": 0, 00:07:13.838 "r_mbytes_per_sec": 0, 00:07:13.838 "w_mbytes_per_sec": 0 00:07:13.838 }, 00:07:13.838 "claimed": false, 00:07:13.838 "zoned": false, 00:07:13.838 "supported_io_types": { 00:07:13.838 "read": true, 00:07:13.838 "write": true, 00:07:13.838 "unmap": true, 00:07:13.838 "flush": true, 00:07:13.838 "reset": true, 00:07:13.838 "nvme_admin": true, 00:07:13.838 "nvme_io": true, 00:07:13.838 "nvme_io_md": false, 00:07:13.838 "write_zeroes": true, 00:07:13.838 "zcopy": false, 00:07:13.838 "get_zone_info": false, 00:07:13.838 "zone_management": false, 00:07:13.838 "zone_append": false, 00:07:13.838 "compare": true, 00:07:13.838 "compare_and_write": true, 00:07:13.838 "abort": true, 00:07:13.838 "seek_hole": false, 00:07:13.838 "seek_data": false, 00:07:13.838 "copy": true, 00:07:13.838 "nvme_iov_md": false 00:07:13.838 }, 00:07:13.838 "memory_domains": [ 00:07:13.838 { 00:07:13.838 "dma_device_id": "system", 00:07:13.838 "dma_device_type": 1 00:07:13.838 } 00:07:13.838 ], 00:07:13.838 "driver_specific": { 00:07:13.838 "nvme": [ 00:07:13.838 { 00:07:13.838 "trid": { 00:07:13.838 "trtype": "TCP", 00:07:13.838 "adrfam": "IPv4", 00:07:13.838 "traddr": "10.0.0.2", 00:07:13.838 "trsvcid": "4420", 00:07:13.838 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:13.838 }, 00:07:13.838 "ctrlr_data": { 00:07:13.838 "cntlid": 1, 00:07:13.838 "vendor_id": "0x8086", 00:07:13.838 "model_number": "SPDK bdev Controller", 00:07:13.838 "serial_number": "SPDK0", 00:07:13.838 "firmware_revision": "25.01", 00:07:13.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:13.838 "oacs": { 00:07:13.838 "security": 0, 00:07:13.838 "format": 0, 00:07:13.838 "firmware": 0, 00:07:13.838 "ns_manage": 0 00:07:13.838 }, 00:07:13.838 "multi_ctrlr": true, 00:07:13.838 
"ana_reporting": false 00:07:13.838 }, 00:07:13.838 "vs": { 00:07:13.838 "nvme_version": "1.3" 00:07:13.838 }, 00:07:13.838 "ns_data": { 00:07:13.838 "id": 1, 00:07:13.838 "can_share": true 00:07:13.838 } 00:07:13.838 } 00:07:13.838 ], 00:07:13.838 "mp_policy": "active_passive" 00:07:13.838 } 00:07:13.838 } 00:07:13.838 ] 00:07:13.838 07:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3214514 00:07:13.838 07:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:13.838 07:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:14.099 Running I/O for 10 seconds... 00:07:15.042 Latency(us) 00:07:15.042 [2024-11-20T06:21:33.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.042 Nvme0n1 : 1.00 21507.00 84.01 0.00 0.00 0.00 0.00 0.00 00:07:15.042 [2024-11-20T06:21:33.252Z] =================================================================================================================== 00:07:15.042 [2024-11-20T06:21:33.252Z] Total : 21507.00 84.01 0.00 0.00 0.00 0.00 0.00 00:07:15.042 00:07:15.984 07:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u af866df4-b7f1-46f5-9dc0-b1db0d91c35d 00:07:15.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.984 Nvme0n1 : 2.00 23384.00 91.34 0.00 0.00 0.00 0.00 0.00 00:07:15.984 [2024-11-20T06:21:34.194Z] =================================================================================================================== 00:07:15.984 [2024-11-20T06:21:34.194Z] Total : 23384.00 91.34 0.00 0.00 0.00 0.00 0.00 00:07:15.984 00:07:15.984 true 00:07:15.984 07:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af866df4-b7f1-46f5-9dc0-b1db0d91c35d 00:07:15.984 07:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:16.245 07:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:16.245 07:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:16.245 07:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3214514 00:07:17.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.185 Nvme0n1 : 3.00 24033.33 93.88 0.00 0.00 0.00 0.00 0.00 00:07:17.185 [2024-11-20T06:21:35.395Z] =================================================================================================================== 00:07:17.185 [2024-11-20T06:21:35.395Z] Total : 24033.33 93.88 0.00 0.00 0.00 0.00 0.00 00:07:17.185 00:07:18.126 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.126 Nvme0n1 : 4.00 24364.75 95.17 0.00 0.00 0.00 0.00 0.00 00:07:18.126 [2024-11-20T06:21:36.336Z] 
=================================================================================================================== 00:07:18.126 [2024-11-20T06:21:36.336Z] Total : 24364.75 95.17 0.00 0.00 0.00 0.00 0.00 00:07:18.126 00:07:19.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.066 Nvme0n1 : 5.00 24573.40 95.99 0.00 0.00 0.00 0.00 0.00 00:07:19.066 [2024-11-20T06:21:37.276Z] =================================================================================================================== 00:07:19.066 [2024-11-20T06:21:37.276Z] Total : 24573.40 95.99 0.00 0.00 0.00 0.00 0.00 00:07:19.066 00:07:20.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.117 Nvme0n1 : 6.00 24722.33 96.57 0.00 0.00 0.00 0.00 0.00 00:07:20.117 [2024-11-20T06:21:38.327Z] =================================================================================================================== 00:07:20.117 [2024-11-20T06:21:38.327Z] Total : 24722.33 96.57 0.00 0.00 0.00 0.00 0.00 00:07:20.117 00:07:21.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.059 Nvme0n1 : 7.00 24820.14 96.95 0.00 0.00 0.00 0.00 0.00 00:07:21.059 [2024-11-20T06:21:39.269Z] =================================================================================================================== 00:07:21.059 [2024-11-20T06:21:39.269Z] Total : 24820.14 96.95 0.00 0.00 0.00 0.00 0.00 00:07:21.059 00:07:21.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.998 Nvme0n1 : 8.00 24901.38 97.27 0.00 0.00 0.00 0.00 0.00 00:07:21.998 [2024-11-20T06:21:40.208Z] =================================================================================================================== 00:07:21.998 [2024-11-20T06:21:40.208Z] Total : 24901.38 97.27 0.00 0.00 0.00 0.00 0.00 00:07:21.998 00:07:22.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.939 Nvme0n1 : 9.00 24957.67 97.49 0.00 0.00 0.00 0.00 0.00 00:07:22.939 [2024-11-20T06:21:41.149Z] =================================================================================================================== 00:07:22.939 [2024-11-20T06:21:41.149Z] Total : 24957.67 97.49 0.00 0.00 0.00 0.00 0.00 00:07:22.939 00:07:23.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.877 Nvme0n1 : 10.00 25008.80 97.69 0.00 0.00 0.00 0.00 0.00 00:07:23.877 [2024-11-20T06:21:42.087Z] =================================================================================================================== 00:07:23.877 [2024-11-20T06:21:42.087Z] Total : 25008.80 97.69 0.00 0.00 0.00 0.00 0.00 00:07:23.877 00:07:24.138 00:07:24.138 Latency(us) 00:07:24.138 [2024-11-20T06:21:42.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.138 Nvme0n1 : 10.00 25006.56 97.68 0.00 0.00 5115.19 2553.17 13926.40 00:07:24.138 [2024-11-20T06:21:42.348Z] =================================================================================================================== 00:07:24.138 [2024-11-20T06:21:42.348Z] Total : 25006.56 97.68 0.00 0.00 5115.19 2553.17 13926.40 00:07:24.138 { 00:07:24.138 "results": [ 00:07:24.138 { 00:07:24.138 "job": "Nvme0n1", 00:07:24.138 "core_mask": "0x2", 00:07:24.138 "workload": "randwrite", 00:07:24.138 "status": "finished", 00:07:24.138 "queue_depth": 128, 00:07:24.138 "io_size": 4096, 00:07:24.138 
"runtime": 10.003416, 00:07:24.138 "iops": 25006.55775986923, 00:07:24.138 "mibps": 97.68186624948918, 00:07:24.138 "io_failed": 0, 00:07:24.138 "io_timeout": 0, 00:07:24.138 "avg_latency_us": 5115.18582061768, 00:07:24.138 "min_latency_us": 2553.173333333333, 00:07:24.138 "max_latency_us": 13926.4 00:07:24.138 } 00:07:24.138 ], 00:07:24.138 "core_count": 1 00:07:24.138 } 00:07:24.138 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3214331 00:07:24.138 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3214331 ']' 00:07:24.138 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3214331 00:07:24.138 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:24.138 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:24.138 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3214331 00:07:24.138 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:24.138 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:24.138 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3214331' 00:07:24.138 killing process with pid 3214331 00:07:24.138 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3214331 00:07:24.138 Received shutdown signal, test time was about 10.000000 seconds 00:07:24.138 00:07:24.138 Latency(us) 00:07:24.138 [2024-11-20T06:21:42.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.138 [2024-11-20T06:21:42.348Z] =================================================================================================================== 00:07:24.138 [2024-11-20T06:21:42.348Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:24.138 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3214331 00:07:24.138 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:24.398 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:24.657 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af866df4-b7f1-46f5-9dc0-b1db0d91c35d 00:07:24.657 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:24.917 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:24.917 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:24.917 07:21:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:24.917 [2024-11-20 07:21:43.022353] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:24.917 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af866df4-b7f1-46f5-9dc0-b1db0d91c35d 00:07:24.917 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:24.917 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af866df4-b7f1-46f5-9dc0-b1db0d91c35d 00:07:24.917 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.917 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.917 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.917 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.917 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.917 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.917 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.917 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:24.917 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af866df4-b7f1-46f5-9dc0-b1db0d91c35d 00:07:25.177 request: 00:07:25.177 { 00:07:25.177 "uuid": "af866df4-b7f1-46f5-9dc0-b1db0d91c35d", 00:07:25.177 "method": "bdev_lvol_get_lvstores", 00:07:25.177 "req_id": 1 00:07:25.177 } 00:07:25.177 Got JSON-RPC error response 00:07:25.177 response: 00:07:25.177 { 00:07:25.177 "code": -19, 00:07:25.177 "message": "No such device" 00:07:25.177 } 00:07:25.177 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:25.177 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.177 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:25.177 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.177 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:25.437 aio_bdev 00:07:25.437 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6d429108-dcc9-41da-bdb8-44a343058b91 00:07:25.437 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=6d429108-dcc9-41da-bdb8-44a343058b91 00:07:25.437 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:25.437 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:25.437 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:25.437 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:25.437 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:25.437 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6d429108-dcc9-41da-bdb8-44a343058b91 -t 2000 00:07:25.698 [ 00:07:25.698 { 00:07:25.698 "name": "6d429108-dcc9-41da-bdb8-44a343058b91", 00:07:25.698 "aliases": [ 00:07:25.698 "lvs/lvol" 00:07:25.698 ], 00:07:25.698 "product_name": "Logical Volume", 00:07:25.698 "block_size": 4096, 00:07:25.698 "num_blocks": 38912, 00:07:25.698 "uuid": "6d429108-dcc9-41da-bdb8-44a343058b91", 00:07:25.698 "assigned_rate_limits": { 00:07:25.698 "rw_ios_per_sec": 0, 00:07:25.698 "rw_mbytes_per_sec": 0, 00:07:25.698 "r_mbytes_per_sec": 0, 00:07:25.698 "w_mbytes_per_sec": 0 00:07:25.698 }, 00:07:25.698 "claimed": false, 00:07:25.698 "zoned": false, 00:07:25.698 "supported_io_types": { 00:07:25.698 "read": true, 00:07:25.698 "write": true, 00:07:25.698 "unmap": true, 00:07:25.698 "flush": false, 00:07:25.698 "reset": true, 00:07:25.698 "nvme_admin": false, 00:07:25.698 "nvme_io": false, 00:07:25.698 "nvme_io_md": false, 00:07:25.698 "write_zeroes": true, 00:07:25.698 "zcopy": false, 00:07:25.698 "get_zone_info": false, 00:07:25.698 "zone_management": false, 00:07:25.698 "zone_append": false, 00:07:25.698 "compare": false, 00:07:25.698 "compare_and_write": false, 00:07:25.698 "abort": false, 00:07:25.698 "seek_hole": true, 00:07:25.698 "seek_data": true, 00:07:25.698 "copy": false, 00:07:25.698 "nvme_iov_md": false 00:07:25.698 }, 00:07:25.698 "driver_specific": { 00:07:25.698 "lvol": { 00:07:25.698 "lvol_store_uuid": "af866df4-b7f1-46f5-9dc0-b1db0d91c35d", 00:07:25.698 "base_bdev": "aio_bdev", 00:07:25.698 "thin_provision": false, 00:07:25.698 "num_allocated_clusters": 38, 00:07:25.698 "snapshot": false, 00:07:25.698 "clone": false, 00:07:25.698 "esnap_clone": false 00:07:25.698 } 00:07:25.698 } 00:07:25.698 } 00:07:25.698 ] 00:07:25.698 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:25.698 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af866df4-b7f1-46f5-9dc0-b1db0d91c35d 00:07:25.698 
07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:25.959 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:25.959 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af866df4-b7f1-46f5-9dc0-b1db0d91c35d 00:07:25.959 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:25.959 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:25.959 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6d429108-dcc9-41da-bdb8-44a343058b91 00:07:26.220 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af866df4-b7f1-46f5-9dc0-b1db0d91c35d 00:07:26.482 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:26.482 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.482 00:07:26.482 real 0m15.973s 00:07:26.482 user 0m15.577s 00:07:26.482 sys 0m1.512s 00:07:26.482 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.482 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:26.482 ************************************ 00:07:26.482 END TEST lvs_grow_clean 00:07:26.482 ************************************ 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:26.743 ************************************ 00:07:26.743 START TEST lvs_grow_dirty 00:07:26.743 ************************************ 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:26.743 07:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:27.003 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:27.003 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:27.003 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:27.264 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:27.264 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:27.264 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5775b60-0a98-4444-84ee-9c6635c2af65 lvol 150 00:07:27.264 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bdd3a400-88c4-4ada-8f1b-08ccac2ae396 00:07:27.264 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.264 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:27.525 [2024-11-20 07:21:45.607269] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:27.525 [2024-11-20 07:21:45.607312] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:27.525 true 00:07:27.525 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:27.525 07:21:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:27.786 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:27.786 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:27.786 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdd3a400-88c4-4ada-8f1b-08ccac2ae396 00:07:28.046 07:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:28.306 [2024-11-20 07:21:46.281224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.306 07:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:28.306 07:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3217544 00:07:28.306 07:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:28.306 07:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:28.306 07:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3217544 /var/tmp/bdevperf.sock 00:07:28.306 07:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3217544 ']' 00:07:28.306 07:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:28.306 07:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:28.306 07:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:28.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:28.306 07:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:28.306 07:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:28.566 [2024-11-20 07:21:46.522844] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
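The dirty variant repeats the clean flow, but the step of interest is the online grow. The truncate to 400M plus bdev_aio_rescan performed above only enlarges the backing AIO bdev, which is why total_data_clusters still reads 49 at this point; the lvstore itself is grown further down, while bdevperf I/O is in flight. A sketch of that path, using the placeholders from the earlier note and the lvstore UUID printed in this run:

  truncate -s 400M "$SPDK_DIR/test/nvmf/target/aio_bdev"
  "$RPC" bdev_aio_rescan aio_bdev
  # grow the lvstore into the new space; the check that follows expects 99 clusters
  "$RPC" bdev_lvol_grow_lvstore -u a5775b60-0a98-4444-84ee-9c6635c2af65
  "$RPC" bdev_lvol_get_lvstores -u a5775b60-0a98-4444-84ee-9c6635c2af65 | jq -r '.[0].total_data_clusters'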
00:07:28.566 [2024-11-20 07:21:46.522911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217544 ] 00:07:28.566 [2024-11-20 07:21:46.607575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.566 [2024-11-20 07:21:46.637291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.137 07:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:29.137 07:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:29.137 07:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:29.707 Nvme0n1 00:07:29.707 07:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:29.707 [ 00:07:29.707 { 00:07:29.707 "name": "Nvme0n1", 00:07:29.707 "aliases": [ 00:07:29.707 "bdd3a400-88c4-4ada-8f1b-08ccac2ae396" 00:07:29.707 ], 00:07:29.707 "product_name": "NVMe disk", 00:07:29.707 "block_size": 4096, 00:07:29.707 "num_blocks": 38912, 00:07:29.707 "uuid": "bdd3a400-88c4-4ada-8f1b-08ccac2ae396", 00:07:29.707 "numa_id": 0, 00:07:29.707 "assigned_rate_limits": { 00:07:29.707 "rw_ios_per_sec": 0, 00:07:29.707 "rw_mbytes_per_sec": 0, 00:07:29.707 "r_mbytes_per_sec": 0, 00:07:29.707 "w_mbytes_per_sec": 0 00:07:29.707 }, 00:07:29.707 "claimed": false, 00:07:29.707 "zoned": false, 00:07:29.707 "supported_io_types": { 00:07:29.707 "read": true, 00:07:29.707 "write": true, 00:07:29.707 "unmap": true, 00:07:29.707 "flush": true, 00:07:29.707 "reset": true, 00:07:29.707 "nvme_admin": true, 00:07:29.707 "nvme_io": true, 00:07:29.707 "nvme_io_md": false, 00:07:29.707 "write_zeroes": true, 00:07:29.708 "zcopy": false, 00:07:29.708 "get_zone_info": false, 00:07:29.708 "zone_management": false, 00:07:29.708 "zone_append": false, 00:07:29.708 "compare": true, 00:07:29.708 "compare_and_write": true, 00:07:29.708 "abort": true, 00:07:29.708 "seek_hole": false, 00:07:29.708 "seek_data": false, 00:07:29.708 "copy": true, 00:07:29.708 "nvme_iov_md": false 00:07:29.708 }, 00:07:29.708 "memory_domains": [ 00:07:29.708 { 00:07:29.708 "dma_device_id": "system", 00:07:29.708 "dma_device_type": 1 00:07:29.708 } 00:07:29.708 ], 00:07:29.708 "driver_specific": { 00:07:29.708 "nvme": [ 00:07:29.708 { 00:07:29.708 "trid": { 00:07:29.708 "trtype": "TCP", 00:07:29.708 "adrfam": "IPv4", 00:07:29.708 "traddr": "10.0.0.2", 00:07:29.708 "trsvcid": "4420", 00:07:29.708 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:29.708 }, 00:07:29.708 "ctrlr_data": { 00:07:29.708 "cntlid": 1, 00:07:29.708 "vendor_id": "0x8086", 00:07:29.708 "model_number": "SPDK bdev Controller", 00:07:29.708 "serial_number": "SPDK0", 00:07:29.708 "firmware_revision": "25.01", 00:07:29.708 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:29.708 "oacs": { 00:07:29.708 "security": 0, 00:07:29.708 "format": 0, 00:07:29.708 "firmware": 0, 00:07:29.708 "ns_manage": 0 00:07:29.708 }, 00:07:29.708 "multi_ctrlr": true, 00:07:29.708 
"ana_reporting": false 00:07:29.708 }, 00:07:29.708 "vs": { 00:07:29.708 "nvme_version": "1.3" 00:07:29.708 }, 00:07:29.708 "ns_data": { 00:07:29.708 "id": 1, 00:07:29.708 "can_share": true 00:07:29.708 } 00:07:29.708 } 00:07:29.708 ], 00:07:29.708 "mp_policy": "active_passive" 00:07:29.708 } 00:07:29.708 } 00:07:29.708 ] 00:07:29.708 07:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3217878 00:07:29.708 07:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:29.708 07:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:29.969 Running I/O for 10 seconds... 00:07:30.910 Latency(us) 00:07:30.910 [2024-11-20T06:21:49.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.910 Nvme0n1 : 1.00 25047.00 97.84 0.00 0.00 0.00 0.00 0.00 00:07:30.910 [2024-11-20T06:21:49.120Z] =================================================================================================================== 00:07:30.910 [2024-11-20T06:21:49.120Z] Total : 25047.00 97.84 0.00 0.00 0.00 0.00 0.00 00:07:30.910 00:07:31.875 07:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:31.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.875 Nvme0n1 : 2.00 25159.50 98.28 0.00 0.00 0.00 0.00 0.00 00:07:31.875 [2024-11-20T06:21:50.085Z] =================================================================================================================== 00:07:31.875 [2024-11-20T06:21:50.085Z] Total : 25159.50 98.28 0.00 0.00 0.00 0.00 0.00 00:07:31.875 00:07:31.875 true 00:07:31.875 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:31.875 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:32.136 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:32.136 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:32.136 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3217878 00:07:33.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.080 Nvme0n1 : 3.00 25196.33 98.42 0.00 0.00 0.00 0.00 0.00 00:07:33.080 [2024-11-20T06:21:51.290Z] =================================================================================================================== 00:07:33.080 [2024-11-20T06:21:51.290Z] Total : 25196.33 98.42 0.00 0.00 0.00 0.00 0.00 00:07:33.080 00:07:34.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.023 Nvme0n1 : 4.00 25249.25 98.63 0.00 0.00 0.00 0.00 0.00 00:07:34.023 [2024-11-20T06:21:52.233Z] 
=================================================================================================================== 00:07:34.023 [2024-11-20T06:21:52.233Z] Total : 25249.25 98.63 0.00 0.00 0.00 0.00 0.00 00:07:34.023 00:07:34.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.966 Nvme0n1 : 5.00 25293.80 98.80 0.00 0.00 0.00 0.00 0.00 00:07:34.966 [2024-11-20T06:21:53.176Z] =================================================================================================================== 00:07:34.966 [2024-11-20T06:21:53.176Z] Total : 25293.80 98.80 0.00 0.00 0.00 0.00 0.00 00:07:34.966 00:07:35.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.907 Nvme0n1 : 6.00 25323.00 98.92 0.00 0.00 0.00 0.00 0.00 00:07:35.907 [2024-11-20T06:21:54.117Z] =================================================================================================================== 00:07:35.907 [2024-11-20T06:21:54.117Z] Total : 25323.00 98.92 0.00 0.00 0.00 0.00 0.00 00:07:35.907 00:07:36.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.849 Nvme0n1 : 7.00 25344.29 99.00 0.00 0.00 0.00 0.00 0.00 00:07:36.849 [2024-11-20T06:21:55.059Z] =================================================================================================================== 00:07:36.849 [2024-11-20T06:21:55.059Z] Total : 25344.29 99.00 0.00 0.00 0.00 0.00 0.00 00:07:36.849 00:07:37.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.789 Nvme0n1 : 8.00 25368.25 99.09 0.00 0.00 0.00 0.00 0.00 00:07:37.789 [2024-11-20T06:21:55.999Z] =================================================================================================================== 00:07:37.789 [2024-11-20T06:21:55.999Z] Total : 25368.25 99.09 0.00 0.00 0.00 0.00 0.00 00:07:37.789 00:07:39.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.171 Nvme0n1 : 9.00 25381.67 99.15 0.00 0.00 0.00 0.00 0.00 00:07:39.171 [2024-11-20T06:21:57.381Z] =================================================================================================================== 00:07:39.171 [2024-11-20T06:21:57.381Z] Total : 25381.67 99.15 0.00 0.00 0.00 0.00 0.00 00:07:39.171 00:07:40.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.109 Nvme0n1 : 10.00 25395.20 99.20 0.00 0.00 0.00 0.00 0.00 00:07:40.110 [2024-11-20T06:21:58.320Z] =================================================================================================================== 00:07:40.110 [2024-11-20T06:21:58.320Z] Total : 25395.20 99.20 0.00 0.00 0.00 0.00 0.00 00:07:40.110 00:07:40.110 00:07:40.110 Latency(us) 00:07:40.110 [2024-11-20T06:21:58.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.110 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.110 Nvme0n1 : 10.00 25397.87 99.21 0.00 0.00 5036.78 3085.65 9338.88 00:07:40.110 [2024-11-20T06:21:58.320Z] =================================================================================================================== 00:07:40.110 [2024-11-20T06:21:58.320Z] Total : 25397.87 99.21 0.00 0.00 5036.78 3085.65 9338.88 00:07:40.110 { 00:07:40.110 "results": [ 00:07:40.110 { 00:07:40.110 "job": "Nvme0n1", 00:07:40.110 "core_mask": "0x2", 00:07:40.110 "workload": "randwrite", 00:07:40.110 "status": "finished", 00:07:40.110 "queue_depth": 128, 00:07:40.110 "io_size": 4096, 00:07:40.110 
"runtime": 10.003989, 00:07:40.110 "iops": 25397.86879013961, 00:07:40.110 "mibps": 99.21042496148286, 00:07:40.110 "io_failed": 0, 00:07:40.110 "io_timeout": 0, 00:07:40.110 "avg_latency_us": 5036.781138539043, 00:07:40.110 "min_latency_us": 3085.653333333333, 00:07:40.110 "max_latency_us": 9338.88 00:07:40.110 } 00:07:40.110 ], 00:07:40.110 "core_count": 1 00:07:40.110 } 00:07:40.110 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3217544 00:07:40.110 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3217544 ']' 00:07:40.110 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3217544 00:07:40.110 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:40.110 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:40.110 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3217544 00:07:40.110 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:40.110 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:40.110 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3217544' 00:07:40.110 killing process with pid 3217544 00:07:40.110 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3217544 00:07:40.110 Received shutdown signal, test time was about 10.000000 seconds 00:07:40.110 00:07:40.110 Latency(us) 00:07:40.110 [2024-11-20T06:21:58.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.110 [2024-11-20T06:21:58.320Z] =================================================================================================================== 00:07:40.110 [2024-11-20T06:21:58.320Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:40.110 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3217544 00:07:40.110 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:40.371 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:40.371 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:40.371 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:40.631 07:21:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3213737 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3213737 00:07:40.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3213737 Killed "${NVMF_APP[@]}" "$@" 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3219914 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3219914 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3219914 ']' 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:40.631 07:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.631 [2024-11-20 07:21:58.768599] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:07:40.631 [2024-11-20 07:21:58.768654] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.890 [2024-11-20 07:21:58.856961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.890 [2024-11-20 07:21:58.886352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.890 [2024-11-20 07:21:58.886378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.890 [2024-11-20 07:21:58.886385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.890 [2024-11-20 07:21:58.886390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
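This is where "dirty" becomes literal: the original nvmf target (pid 3213737) is removed with kill -9 while the lvstore is still open, and a fresh nvmf_tgt is started in its place. Re-creating the AIO bdev on the same, uncleanly closed file then forces blobstore recovery, visible in the "Performing recovery on blobstore" notices just below. A sketch of that replay step, under the same placeholders as above:

  # on the fresh target: re-create the AIO bdev over the unclean file and let
  # examine run; blobstore recovery replays the metadata, and the cluster
  # counts must come back unchanged
  "$RPC" bdev_aio_create "$SPDK_DIR/test/nvmf/target/aio_bdev" aio_bdev 4096
  "$RPC" bdev_wait_for_examine
  "$RPC" bdev_lvol_get_lvstores -u a5775b60-0a98-4444-84ee-9c6635c2af65 | jq -r '.[0].free_clusters'   # expected: 61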
00:07:40.890 [2024-11-20 07:21:58.886394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.890 [2024-11-20 07:21:58.886832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.459 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:41.459 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:41.459 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:41.459 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:41.459 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:41.459 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.459 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:41.718 [2024-11-20 07:21:59.745511] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:41.718 [2024-11-20 07:21:59.745581] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:41.718 [2024-11-20 07:21:59.745603] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:41.718 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:41.718 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bdd3a400-88c4-4ada-8f1b-08ccac2ae396 00:07:41.718 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=bdd3a400-88c4-4ada-8f1b-08ccac2ae396 00:07:41.718 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:41.718 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:41.718 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:41.718 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:41.718 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:41.978 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bdd3a400-88c4-4ada-8f1b-08ccac2ae396 -t 2000 00:07:41.978 [ 00:07:41.978 { 00:07:41.978 "name": "bdd3a400-88c4-4ada-8f1b-08ccac2ae396", 00:07:41.978 "aliases": [ 00:07:41.978 "lvs/lvol" 00:07:41.978 ], 00:07:41.978 "product_name": "Logical Volume", 00:07:41.978 "block_size": 4096, 00:07:41.978 "num_blocks": 38912, 00:07:41.978 "uuid": "bdd3a400-88c4-4ada-8f1b-08ccac2ae396", 00:07:41.978 "assigned_rate_limits": { 00:07:41.978 "rw_ios_per_sec": 0, 00:07:41.978 "rw_mbytes_per_sec": 0, 
00:07:41.978 "r_mbytes_per_sec": 0, 00:07:41.978 "w_mbytes_per_sec": 0 00:07:41.978 }, 00:07:41.978 "claimed": false, 00:07:41.978 "zoned": false, 00:07:41.978 "supported_io_types": { 00:07:41.978 "read": true, 00:07:41.978 "write": true, 00:07:41.978 "unmap": true, 00:07:41.978 "flush": false, 00:07:41.978 "reset": true, 00:07:41.978 "nvme_admin": false, 00:07:41.978 "nvme_io": false, 00:07:41.978 "nvme_io_md": false, 00:07:41.978 "write_zeroes": true, 00:07:41.978 "zcopy": false, 00:07:41.978 "get_zone_info": false, 00:07:41.978 "zone_management": false, 00:07:41.978 "zone_append": false, 00:07:41.978 "compare": false, 00:07:41.978 "compare_and_write": false, 00:07:41.978 "abort": false, 00:07:41.978 "seek_hole": true, 00:07:41.978 "seek_data": true, 00:07:41.978 "copy": false, 00:07:41.978 "nvme_iov_md": false 00:07:41.978 }, 00:07:41.978 "driver_specific": { 00:07:41.978 "lvol": { 00:07:41.978 "lvol_store_uuid": "a5775b60-0a98-4444-84ee-9c6635c2af65", 00:07:41.978 "base_bdev": "aio_bdev", 00:07:41.978 "thin_provision": false, 00:07:41.978 "num_allocated_clusters": 38, 00:07:41.978 "snapshot": false, 00:07:41.978 "clone": false, 00:07:41.978 "esnap_clone": false 00:07:41.978 } 00:07:41.978 } 00:07:41.978 } 00:07:41.978 ] 00:07:41.978 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:41.978 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:41.978 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:42.238 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:42.238 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:42.238 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:42.497 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:42.497 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:42.497 [2024-11-20 07:22:00.634291] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:42.497 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:42.497 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:42.497 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:42.497 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.497 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.497 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.497 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.498 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.498 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.498 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.498 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:42.498 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:42.757 request: 00:07:42.757 { 00:07:42.757 "uuid": "a5775b60-0a98-4444-84ee-9c6635c2af65", 00:07:42.757 "method": "bdev_lvol_get_lvstores", 00:07:42.757 "req_id": 1 00:07:42.757 } 00:07:42.757 Got JSON-RPC error response 00:07:42.757 response: 00:07:42.757 { 00:07:42.757 "code": -19, 00:07:42.757 "message": "No such device" 00:07:42.757 } 00:07:42.757 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:42.757 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.757 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.757 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.757 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:43.017 aio_bdev 00:07:43.017 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bdd3a400-88c4-4ada-8f1b-08ccac2ae396 00:07:43.017 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=bdd3a400-88c4-4ada-8f1b-08ccac2ae396 00:07:43.017 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:43.017 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:43.017 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:43.017 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:43.017 07:22:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:43.017 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bdd3a400-88c4-4ada-8f1b-08ccac2ae396 -t 2000 00:07:43.277 [ 00:07:43.277 { 00:07:43.277 "name": "bdd3a400-88c4-4ada-8f1b-08ccac2ae396", 00:07:43.277 "aliases": [ 00:07:43.277 "lvs/lvol" 00:07:43.277 ], 00:07:43.277 "product_name": "Logical Volume", 00:07:43.277 "block_size": 4096, 00:07:43.277 "num_blocks": 38912, 00:07:43.277 "uuid": "bdd3a400-88c4-4ada-8f1b-08ccac2ae396", 00:07:43.277 "assigned_rate_limits": { 00:07:43.277 "rw_ios_per_sec": 0, 00:07:43.277 "rw_mbytes_per_sec": 0, 00:07:43.277 "r_mbytes_per_sec": 0, 00:07:43.277 "w_mbytes_per_sec": 0 00:07:43.277 }, 00:07:43.277 "claimed": false, 00:07:43.277 "zoned": false, 00:07:43.277 "supported_io_types": { 00:07:43.277 "read": true, 00:07:43.277 "write": true, 00:07:43.277 "unmap": true, 00:07:43.277 "flush": false, 00:07:43.277 "reset": true, 00:07:43.277 "nvme_admin": false, 00:07:43.277 "nvme_io": false, 00:07:43.277 "nvme_io_md": false, 00:07:43.277 "write_zeroes": true, 00:07:43.277 "zcopy": false, 00:07:43.277 "get_zone_info": false, 00:07:43.277 "zone_management": false, 00:07:43.277 "zone_append": false, 00:07:43.277 "compare": false, 00:07:43.277 "compare_and_write": false, 00:07:43.277 "abort": false, 00:07:43.277 "seek_hole": true, 00:07:43.277 "seek_data": true, 00:07:43.277 "copy": false, 00:07:43.277 "nvme_iov_md": false 00:07:43.277 }, 00:07:43.277 "driver_specific": { 00:07:43.277 "lvol": { 00:07:43.277 "lvol_store_uuid": "a5775b60-0a98-4444-84ee-9c6635c2af65", 00:07:43.277 "base_bdev": "aio_bdev", 00:07:43.277 "thin_provision": false, 00:07:43.277 "num_allocated_clusters": 38, 00:07:43.277 "snapshot": false, 00:07:43.277 "clone": false, 00:07:43.277 "esnap_clone": false 00:07:43.277 } 00:07:43.277 } 00:07:43.277 } 00:07:43.277 ] 00:07:43.277 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:43.277 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:43.277 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:43.537 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:43.537 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:43.537 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:43.537 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:43.537 07:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bdd3a400-88c4-4ada-8f1b-08ccac2ae396 00:07:43.796 07:22:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5775b60-0a98-4444-84ee-9c6635c2af65 00:07:44.056 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:44.056 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:44.056 00:07:44.056 real 0m17.468s 00:07:44.056 user 0m45.368s 00:07:44.056 sys 0m2.990s 00:07:44.056 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:44.056 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:44.056 ************************************ 00:07:44.056 END TEST lvs_grow_dirty 00:07:44.056 ************************************ 00:07:44.056 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:44.056 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:07:44.056 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:07:44.056 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:07:44.056 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:44.056 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:07:44.056 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:07:44.056 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:07:44.056 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:44.056 nvmf_trace.0 00:07:44.315 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:07:44.315 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:44.315 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:44.315 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:44.315 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:44.315 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:44.315 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:44.315 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:44.315 rmmod nvme_tcp 00:07:44.316 rmmod nvme_fabrics 00:07:44.316 rmmod nvme_keyring 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:44.316 
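process_shm, invoked above on test exit, just finds the tracepoint shm file for shm id 0 and archives it; done by hand the equivalent is roughly the following (output path illustrative, shm id 0 as in this run):

    # Locate and archive the tracepoint buffer the target left in /dev/shm.
    find /dev/shm -name '*.0' -printf '%f\n'
    tar -C /dev/shm/ -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0
    # While the target is still running, 'spdk_trace -s nvmf -i 0' snapshots
    # the same events live, per the app_setup_trace notice.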
07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3219914 ']' 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3219914 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3219914 ']' 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3219914 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3219914 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3219914' 00:07:44.316 killing process with pid 3219914 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3219914 00:07:44.316 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3219914 00:07:44.575 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:44.575 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:44.575 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:44.575 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:44.575 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:44.575 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:44.575 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:44.575 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:44.575 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:44.575 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.575 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.575 07:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.485 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:46.485 00:07:46.485 real 0m44.394s 00:07:46.485 user 1m7.194s 00:07:46.485 sys 0m10.750s 00:07:46.485 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.485 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.485 ************************************ 00:07:46.485 END TEST nvmf_lvs_grow 00:07:46.485 ************************************ 00:07:46.485 07:22:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:46.485 07:22:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:46.485 07:22:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.485 07:22:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.746 ************************************ 00:07:46.746 START TEST nvmf_bdev_io_wait 00:07:46.746 ************************************ 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:46.746 * Looking for test storage... 00:07:46.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:46.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.746 --rc genhtml_branch_coverage=1 00:07:46.746 --rc genhtml_function_coverage=1 00:07:46.746 --rc genhtml_legend=1 00:07:46.746 --rc geninfo_all_blocks=1 00:07:46.746 --rc geninfo_unexecuted_blocks=1 00:07:46.746 00:07:46.746 ' 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:46.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.746 --rc genhtml_branch_coverage=1 00:07:46.746 --rc genhtml_function_coverage=1 00:07:46.746 --rc genhtml_legend=1 00:07:46.746 --rc geninfo_all_blocks=1 00:07:46.746 --rc geninfo_unexecuted_blocks=1 00:07:46.746 00:07:46.746 ' 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:46.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.746 --rc genhtml_branch_coverage=1 00:07:46.746 --rc genhtml_function_coverage=1 00:07:46.746 --rc genhtml_legend=1 00:07:46.746 --rc geninfo_all_blocks=1 00:07:46.746 --rc geninfo_unexecuted_blocks=1 00:07:46.746 00:07:46.746 ' 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:46.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.746 --rc genhtml_branch_coverage=1 00:07:46.746 --rc genhtml_function_coverage=1 00:07:46.746 --rc genhtml_legend=1 00:07:46.746 --rc geninfo_all_blocks=1 00:07:46.746 --rc geninfo_unexecuted_blocks=1 00:07:46.746 00:07:46.746 ' 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.746 07:22:04 
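The lt/cmp_versions trace above is deciding whether the installed lcov predates 2.x before the branch/function coverage flags are exported. As an illustration only (not the canonical scripts/common.sh implementation), the comparison boils down to:

    # Field-wise numeric comparison of dotted version strings.
    lt() {
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1 # equal is not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 < 2: use the --rc lcov_*_coverage=1 option style"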
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.746 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.747 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.007 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:47.007 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:47.007 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:47.007 07:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:55.142 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:55.142 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.142 07:22:12 
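Both ports matched the E810 device id 0x159b above; what follows resolves each PCI function to its kernel net device through sysfs, essentially:

    # Resolve PCI address -> netdev name, echoed below as
    # "Found net devices under ...".
    for pci in 0000:31:00.0 0000:31:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] && echo "Found net devices under $pci: ${netdir##*/}"
        done
    done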
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:55.142 Found net devices under 0000:31:00.0: cvl_0_0 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:55.142 Found net devices under 0000:31:00.1: cvl_0_1 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:55.142 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:55.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:07:55.143 00:07:55.143 --- 10.0.0.2 ping statistics --- 00:07:55.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.143 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:55.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:55.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:07:55.143 00:07:55.143 --- 10.0.0.1 ping statistics --- 00:07:55.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.143 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3225025 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3225025 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3225025 ']' 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:55.143 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.143 [2024-11-20 07:22:12.622587] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
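Condensed, the topology nvmf_tcp_init built above is one physical link split across a network namespace: the target port moves into the namespace, the initiator port stays in the root namespace. The bare equivalent of the traced commands (interface names from this rig; other machines will differ):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port, then prove reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1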
00:07:55.143 [2024-11-20 07:22:12.622652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.143 [2024-11-20 07:22:12.722256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.143 [2024-11-20 07:22:12.776677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.143 [2024-11-20 07:22:12.776728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.143 [2024-11-20 07:22:12.776737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.143 [2024-11-20 07:22:12.776757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.143 [2024-11-20 07:22:12.776767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:55.143 [2024-11-20 07:22:12.778828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.143 [2024-11-20 07:22:12.778991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.143 [2024-11-20 07:22:12.779150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.143 [2024-11-20 07:22:12.779152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:55.404 [2024-11-20 07:22:13.566408] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.404 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:55.405 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.405 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.405 Malloc0 00:07:55.405 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.405 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:55.405 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.405 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.666 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.666 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:55.666 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.666 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.666 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.666 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.666 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.666 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.666 [2024-11-20 07:22:13.632012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3225270 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3225273 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:55.667 { 00:07:55.667 "params": { 
00:07:55.667 "name": "Nvme$subsystem", 00:07:55.667 "trtype": "$TEST_TRANSPORT", 00:07:55.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.667 "adrfam": "ipv4", 00:07:55.667 "trsvcid": "$NVMF_PORT", 00:07:55.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.667 "hdgst": ${hdgst:-false}, 00:07:55.667 "ddgst": ${ddgst:-false} 00:07:55.667 }, 00:07:55.667 "method": "bdev_nvme_attach_controller" 00:07:55.667 } 00:07:55.667 EOF 00:07:55.667 )") 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3225276 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3225280 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:55.667 { 00:07:55.667 "params": { 00:07:55.667 "name": "Nvme$subsystem", 00:07:55.667 "trtype": "$TEST_TRANSPORT", 00:07:55.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.667 "adrfam": "ipv4", 00:07:55.667 "trsvcid": "$NVMF_PORT", 00:07:55.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.667 "hdgst": ${hdgst:-false}, 00:07:55.667 "ddgst": ${ddgst:-false} 00:07:55.667 }, 00:07:55.667 "method": "bdev_nvme_attach_controller" 00:07:55.667 } 00:07:55.667 EOF 00:07:55.667 )") 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:55.667 { 00:07:55.667 "params": { 00:07:55.667 "name": "Nvme$subsystem", 00:07:55.667 "trtype": "$TEST_TRANSPORT", 00:07:55.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.667 "adrfam": "ipv4", 00:07:55.667 "trsvcid": "$NVMF_PORT", 00:07:55.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.667 "hdgst": ${hdgst:-false}, 
00:07:55.667 "ddgst": ${ddgst:-false} 00:07:55.667 }, 00:07:55.667 "method": "bdev_nvme_attach_controller" 00:07:55.667 } 00:07:55.667 EOF 00:07:55.667 )") 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:55.667 { 00:07:55.667 "params": { 00:07:55.667 "name": "Nvme$subsystem", 00:07:55.667 "trtype": "$TEST_TRANSPORT", 00:07:55.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.667 "adrfam": "ipv4", 00:07:55.667 "trsvcid": "$NVMF_PORT", 00:07:55.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.667 "hdgst": ${hdgst:-false}, 00:07:55.667 "ddgst": ${ddgst:-false} 00:07:55.667 }, 00:07:55.667 "method": "bdev_nvme_attach_controller" 00:07:55.667 } 00:07:55.667 EOF 00:07:55.667 )") 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3225270 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:55.667 "params": { 00:07:55.667 "name": "Nvme1", 00:07:55.667 "trtype": "tcp", 00:07:55.667 "traddr": "10.0.0.2", 00:07:55.667 "adrfam": "ipv4", 00:07:55.667 "trsvcid": "4420", 00:07:55.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.667 "hdgst": false, 00:07:55.667 "ddgst": false 00:07:55.667 }, 00:07:55.667 "method": "bdev_nvme_attach_controller" 00:07:55.667 }' 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:55.667 "params": { 00:07:55.667 "name": "Nvme1", 00:07:55.667 "trtype": "tcp", 00:07:55.667 "traddr": "10.0.0.2", 00:07:55.667 "adrfam": "ipv4", 00:07:55.667 "trsvcid": "4420", 00:07:55.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.667 "hdgst": false, 00:07:55.667 "ddgst": false 00:07:55.667 }, 00:07:55.667 "method": "bdev_nvme_attach_controller" 00:07:55.667 }' 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:55.667 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:55.667 "params": { 00:07:55.667 "name": "Nvme1", 00:07:55.667 "trtype": "tcp", 00:07:55.667 "traddr": "10.0.0.2", 00:07:55.667 "adrfam": "ipv4", 00:07:55.667 "trsvcid": "4420", 00:07:55.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.667 "hdgst": false, 00:07:55.667 "ddgst": false 00:07:55.668 }, 00:07:55.668 "method": "bdev_nvme_attach_controller" 00:07:55.668 }' 00:07:55.668 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:55.668 07:22:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:55.668 "params": { 00:07:55.668 "name": "Nvme1", 00:07:55.668 "trtype": "tcp", 00:07:55.668 "traddr": "10.0.0.2", 00:07:55.668 "adrfam": "ipv4", 00:07:55.668 "trsvcid": "4420", 00:07:55.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:55.668 "hdgst": false, 00:07:55.668 "ddgst": false 00:07:55.668 }, 00:07:55.668 "method": "bdev_nvme_attach_controller" 00:07:55.668 }' 00:07:55.668 [2024-11-20 07:22:13.690296] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:07:55.668 [2024-11-20 07:22:13.690362] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:55.668 [2024-11-20 07:22:13.693632] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:07:55.668 [2024-11-20 07:22:13.693700] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:55.668 [2024-11-20 07:22:13.694946] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:07:55.668 [2024-11-20 07:22:13.694951] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:07:55.668 [2024-11-20 07:22:13.695012] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:55.668 [2024-11-20 07:22:13.695014] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:55.928 [2024-11-20 07:22:13.913599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.928 [2024-11-20 07:22:13.957006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:55.928 [2024-11-20 07:22:13.983846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.928 [2024-11-20 07:22:14.019667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:55.928 [2024-11-20 07:22:14.053186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.929 [2024-11-20 07:22:14.086041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:55.929 [2024-11-20 07:22:14.109560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.189 [2024-11-20 07:22:14.147079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:56.189 Running I/O for 1 seconds... 00:07:56.189 Running I/O for 1 seconds... 00:07:56.189 Running I/O for 1 seconds... 00:07:56.449 Running I/O for 1 seconds... 00:07:57.021 11555.00 IOPS, 45.14 MiB/s [2024-11-20T06:22:15.231Z] 188568.00 IOPS, 736.59 MiB/s 00:07:57.021 Latency(us) 00:07:57.021 [2024-11-20T06:22:15.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.021 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:57.021 Nvme1n1 : 1.01 11597.60 45.30 0.00 0.00 10992.36 5925.55 17148.59 00:07:57.021 [2024-11-20T06:22:15.231Z] =================================================================================================================== 00:07:57.021 [2024-11-20T06:22:15.231Z] Total : 11597.60 45.30 0.00 0.00 10992.36 5925.55 17148.59 00:07:57.021 00:07:57.021 Latency(us) [2024-11-20T06:22:15.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.021 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:57.021 Nvme1n1 : 1.00 188189.27 735.11 0.00 0.00 676.03 305.49 1979.73 00:07:57.021 [2024-11-20T06:22:15.231Z] =================================================================================================================== 00:07:57.021 [2024-11-20T06:22:15.231Z] Total : 188189.27 735.11 0.00 0.00 676.03 305.49 1979.73 00:07:57.281 9327.00 IOPS, 36.43 MiB/s 00:07:57.281 Latency(us) [2024-11-20T06:22:15.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.281 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:57.281 Nvme1n1 : 1.01 9395.09 36.70 0.00 0.00 13569.13 4587.52 19988.48 00:07:57.281 [2024-11-20T06:22:15.491Z] =================================================================================================================== 00:07:57.281 [2024-11-20T06:22:15.491Z] Total : 9395.09 36.70 0.00 0.00 13569.13 4587.52 19988.48 00:07:57.281 10530.00 IOPS, 41.13 MiB/s 00:07:57.281 Latency(us) [2024-11-20T06:22:15.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s
TO/s Average min max 00:07:57.281 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:57.281 Nvme1n1 : 1.01 10609.76 41.44 0.00 0.00 12028.57 3932.16 23920.64 00:07:57.281 [2024-11-20T06:22:15.491Z] =================================================================================================================== 00:07:57.281 [2024-11-20T06:22:15.491Z] Total : 10609.76 41.44 0.00 0.00 12028.57 3932.16 23920.64 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3225273 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3225276 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3225280 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:57.542 rmmod nvme_tcp 00:07:57.542 rmmod nvme_fabrics 00:07:57.542 rmmod nvme_keyring 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3225025 ']' 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3225025 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3225025 ']' 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3225025 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3225025 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 
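A quick way to sanity-check the four bdevperf tables above: the MiB/s column is simply IOPS times the 4096-byte IO size divided by 2^20. For the read job, for instance:

    # 11597.60 IOPS at 4 KiB per I/O -> prints 45.30, matching the table
    awk 'BEGIN { printf "%.2f MiB/s\n", 11597.60 * 4096 / 1048576 }'

The flush job's much larger figure (188189.27 IOPS, 735.11 MiB/s) passes the same arithmetic; flushes against a malloc bdev move no data, so that line reflects command turnaround rather than actual throughput.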
00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3225025' 00:07:57.542 killing process with pid 3225025 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3225025 00:07:57.542 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3225025 00:07:57.803 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:57.803 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:57.803 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:57.803 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:57.803 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:57.803 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:57.803 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:57.803 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:57.803 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:57.803 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.803 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.803 07:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.350 07:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:00.350 00:08:00.350 real 0m13.264s 00:08:00.350 user 0m19.792s 00:08:00.350 sys 0m7.577s 00:08:00.350 07:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.350 07:22:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:00.350 ************************************ 00:08:00.350 END TEST nvmf_bdev_io_wait 00:08:00.350 ************************************ 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.350 ************************************ 00:08:00.350 START TEST nvmf_queue_depth 00:08:00.350 ************************************ 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:00.350 * Looking for test storage... 
00:08:00.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:00.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.350 --rc genhtml_branch_coverage=1 00:08:00.350 --rc genhtml_function_coverage=1 00:08:00.350 --rc genhtml_legend=1 00:08:00.350 --rc geninfo_all_blocks=1 00:08:00.350 --rc geninfo_unexecuted_blocks=1 00:08:00.350 00:08:00.350 ' 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:00.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.350 --rc genhtml_branch_coverage=1 00:08:00.350 --rc genhtml_function_coverage=1 00:08:00.350 --rc genhtml_legend=1 00:08:00.350 --rc geninfo_all_blocks=1 00:08:00.350 --rc geninfo_unexecuted_blocks=1 00:08:00.350 00:08:00.350 ' 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:00.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.350 --rc genhtml_branch_coverage=1 00:08:00.350 --rc genhtml_function_coverage=1 00:08:00.350 --rc genhtml_legend=1 00:08:00.350 --rc geninfo_all_blocks=1 00:08:00.350 --rc geninfo_unexecuted_blocks=1 00:08:00.350 00:08:00.350 ' 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:00.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.350 --rc genhtml_branch_coverage=1 00:08:00.350 --rc genhtml_function_coverage=1 00:08:00.350 --rc genhtml_legend=1 00:08:00.350 --rc geninfo_all_blocks=1 00:08:00.350 --rc geninfo_unexecuted_blocks=1 00:08:00.350 00:08:00.350 ' 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:00.350 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:00.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:00.351 07:22:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.492 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.492 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:08.492 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:08.492 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:08.493 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:08.493 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:08.493 Found net devices under 0000:31:00.0: cvl_0_0 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:08.493 Found net devices under 0000:31:00.1: cvl_0_1 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:08.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:08:08.493 00:08:08.493 --- 10.0.0.2 ping statistics --- 00:08:08.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.493 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:08.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:08:08.493 00:08:08.493 --- 10.0.0.1 ping statistics --- 00:08:08.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.493 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:08:08.493 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.494 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:08.494 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:08.494 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.494 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:08.494 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:08.494 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.494 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:08.494 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:08.494 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:08.494 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:08.494 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.494 07:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.494 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3230044 00:08:08.494 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3230044 00:08:08.494 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3230044 ']' 00:08:08.494 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:08.494 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.494 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:08.494 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.494 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:08.494 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.494 [2024-11-20 07:22:26.060916] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
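Everything from here on runs against the namespace pair that nvmf_tcp_init wired up above. Condensed to its core, and keeping this run's cvl_0_0/cvl_0_1 interface names, the plumbing is roughly:

    # target NIC moves into its own netns; the initiator NIC stays in the default one
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the target itself is then launched inside that namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The two pings (10.0.0.2 from the default namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) verify both directions of the link before any NVMe/TCP traffic starts.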
00:08:08.494 [2024-11-20 07:22:26.060981] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.494 [2024-11-20 07:22:26.163814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.494 [2024-11-20 07:22:26.214111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.494 [2024-11-20 07:22:26.214159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.494 [2024-11-20 07:22:26.214168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.494 [2024-11-20 07:22:26.214175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.494 [2024-11-20 07:22:26.214181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.494 [2024-11-20 07:22:26.215000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.755 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:08.755 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:08.755 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:08.755 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:08.755 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.755 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.756 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:08.756 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.756 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.756 [2024-11-20 07:22:26.919567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.756 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.756 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:08.756 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.756 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.756 Malloc0 00:08:08.756 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.756 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:08.756 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.756 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:08.756 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.017 07:22:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.017 [2024-11-20 07:22:26.980659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3230150 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3230150 /var/tmp/bdevperf.sock 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3230150 ']' 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.017 07:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.017 [2024-11-20 07:22:27.039727] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
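With the listener up, queue_depth.sh uses bdevperf's two-socket pattern: the perf app starts idle (-z) on its own private RPC socket so the NVMe bdev can be attached and the run kicked off explicitly. Reduced to the three commands involved (paths relative to the spdk tree, socket name as used in this run):

    # 1. start bdevperf idle: queue depth 1024, 4 KiB verify workload, 10 s
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # 2. attach the target's namespace as bdev NVMe0 over that private socket
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # 3. trigger the actual I/O
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Driving a single namespace at -q 1024 deliberately pushes the queue depth well past typical transport defaults, which is exactly what this test is exercising.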
00:08:09.017 [2024-11-20 07:22:27.039794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3230150 ] 00:08:09.017 [2024-11-20 07:22:27.133223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.017 [2024-11-20 07:22:27.185722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.960 07:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:09.960 07:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:09.960 07:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:09.960 07:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.960 07:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.960 NVMe0n1 00:08:09.960 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.960 07:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:09.961 Running I/O for 10 seconds... 00:08:11.963 8478.00 IOPS, 33.12 MiB/s [2024-11-20T06:22:31.554Z] 9723.50 IOPS, 37.98 MiB/s [2024-11-20T06:22:32.500Z] 10445.67 IOPS, 40.80 MiB/s [2024-11-20T06:22:33.442Z] 11176.75 IOPS, 43.66 MiB/s [2024-11-20T06:22:34.383Z] 11670.20 IOPS, 45.59 MiB/s [2024-11-20T06:22:35.323Z] 11949.50 IOPS, 46.68 MiB/s [2024-11-20T06:22:36.266Z] 12255.14 IOPS, 47.87 MiB/s [2024-11-20T06:22:37.208Z] 12409.25 IOPS, 48.47 MiB/s [2024-11-20T06:22:38.591Z] 12514.00 IOPS, 48.88 MiB/s [2024-11-20T06:22:38.591Z] 12638.00 IOPS, 49.37 MiB/s 00:08:20.381 Latency(us) 00:08:20.381 [2024-11-20T06:22:38.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.381 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:20.382 Verification LBA range: start 0x0 length 0x4000 00:08:20.382 NVMe0n1 : 10.05 12663.37 49.47 0.00 0.00 80558.65 10212.69 69031.25 00:08:20.382 [2024-11-20T06:22:38.592Z] =================================================================================================================== 00:08:20.382 [2024-11-20T06:22:38.592Z] Total : 12663.37 49.47 0.00 0.00 80558.65 10212.69 69031.25 00:08:20.382 { 00:08:20.382 "results": [ 00:08:20.382 { 00:08:20.382 "job": "NVMe0n1", 00:08:20.382 "core_mask": "0x1", 00:08:20.382 "workload": "verify", 00:08:20.382 "status": "finished", 00:08:20.382 "verify_range": { 00:08:20.382 "start": 0, 00:08:20.382 "length": 16384 00:08:20.382 }, 00:08:20.382 "queue_depth": 1024, 00:08:20.382 "io_size": 4096, 00:08:20.382 "runtime": 10.046927, 00:08:20.382 "iops": 12663.374582098586, 00:08:20.382 "mibps": 49.4663069613226, 00:08:20.382 "io_failed": 0, 00:08:20.382 "io_timeout": 0, 00:08:20.382 "avg_latency_us": 80558.65166116474, 00:08:20.382 "min_latency_us": 10212.693333333333, 00:08:20.382 "max_latency_us": 69031.25333333333 00:08:20.382 } 00:08:20.382 ], 00:08:20.382 "core_count": 1 00:08:20.382 } 00:08:20.382 07:22:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3230150 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3230150 ']' 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3230150 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3230150 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3230150' 00:08:20.382 killing process with pid 3230150 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3230150 00:08:20.382 Received shutdown signal, test time was about 10.000000 seconds 00:08:20.382 00:08:20.382 Latency(us) 00:08:20.382 [2024-11-20T06:22:38.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.382 [2024-11-20T06:22:38.592Z] =================================================================================================================== 00:08:20.382 [2024-11-20T06:22:38.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3230150 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.382 rmmod nvme_tcp 00:08:20.382 rmmod nvme_fabrics 00:08:20.382 rmmod nvme_keyring 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3230044 ']' 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3230044 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3230044 ']' 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 3230044 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3230044 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3230044' 00:08:20.382 killing process with pid 3230044 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3230044 00:08:20.382 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3230044 00:08:20.643 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.643 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:20.643 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:20.643 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:20.643 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:20.643 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:20.643 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:20.643 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.643 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.643 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.643 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.643 07:22:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.561 07:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:22.561 00:08:22.561 real 0m22.701s 00:08:22.561 user 0m25.880s 00:08:22.561 sys 0m7.131s 00:08:22.561 07:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.561 07:22:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:22.561 ************************************ 00:08:22.561 END TEST nvmf_queue_depth 00:08:22.561 ************************************ 00:08:22.823 07:22:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:22.823 07:22:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:22.823 07:22:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.823 07:22:40 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:22.823 ************************************ 00:08:22.823 START TEST nvmf_target_multipath 00:08:22.823 ************************************ 00:08:22.823 07:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:22.823 * Looking for test storage... 00:08:22.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.823 07:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:22.823 07:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:22.823 07:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:22.823 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:22.823 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.823 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.823 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.823 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.823 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:22.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.824 --rc genhtml_branch_coverage=1 00:08:22.824 --rc genhtml_function_coverage=1 00:08:22.824 --rc genhtml_legend=1 00:08:22.824 --rc geninfo_all_blocks=1 00:08:22.824 --rc geninfo_unexecuted_blocks=1 00:08:22.824 00:08:22.824 ' 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:22.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.824 --rc genhtml_branch_coverage=1 00:08:22.824 --rc genhtml_function_coverage=1 00:08:22.824 --rc genhtml_legend=1 00:08:22.824 --rc geninfo_all_blocks=1 00:08:22.824 --rc geninfo_unexecuted_blocks=1 00:08:22.824 00:08:22.824 ' 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:22.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.824 --rc genhtml_branch_coverage=1 00:08:22.824 --rc genhtml_function_coverage=1 00:08:22.824 --rc genhtml_legend=1 00:08:22.824 --rc geninfo_all_blocks=1 00:08:22.824 --rc geninfo_unexecuted_blocks=1 00:08:22.824 00:08:22.824 ' 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:22.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.824 --rc genhtml_branch_coverage=1 00:08:22.824 --rc genhtml_function_coverage=1 00:08:22.824 --rc genhtml_legend=1 00:08:22.824 --rc geninfo_all_blocks=1 00:08:22.824 --rc geninfo_unexecuted_blocks=1 00:08:22.824 00:08:22.824 ' 00:08:22.824 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:23.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:23.087 07:22:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:31.233 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:31.233 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.233 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:31.234 Found net devices under 0000:31:00.0: cvl_0_0 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.234 07:22:48 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:31.234 Found net devices under 0000:31:00.1: cvl_0_1 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:31.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:08:31.234 00:08:31.234 --- 10.0.0.2 ping statistics --- 00:08:31.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.234 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:08:31.234 00:08:31.234 --- 10.0.0.1 ping statistics --- 00:08:31.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.234 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:31.234 only one NIC for nvmf test 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
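The teardown now underway reverses the network split that nvmf_tcp_init built at the start of this test: one port of the E810 pair (cvl_0_0) was moved into a private namespace as the target side, the other (cvl_0_1) stayed in the default namespace as the initiator, each side got a 10.0.0.x/24 address, an iptables ACCEPT rule tagged SPDK_NVMF opened port 4420, and a ping in each direction confirmed the path. A standalone sketch of that pattern follows; the interface names are the ones from this run, and everything else is illustrative rather than the actual nvmf/common.sh code:

  # Sketch of the nvmf_tcp_init pattern traced above (not the real script).
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Open NVMe/TCP port 4420, tagging the rule so teardown can strip it later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                           # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator

The SPDK_NVMF comment on the rule is what lets the iptr step in the teardown below restore the firewall with iptables-save | grep -v SPDK_NVMF | iptables-restore, dropping only the rules this test added, while remove_spdk_ns deletes the namespace itself.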
00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:31.234 rmmod nvme_tcp 00:08:31.234 rmmod nvme_fabrics 00:08:31.234 rmmod nvme_keyring 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.234 07:22:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.149 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:33.149 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:33.149 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:33.149 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.149 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:33.149 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.149 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:33.149 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.149 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.149 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:33.150 00:08:33.150 real 0m10.071s 00:08:33.150 user 0m2.210s 00:08:33.150 sys 0m5.786s 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:33.150 ************************************ 00:08:33.150 END TEST nvmf_target_multipath 00:08:33.150 ************************************ 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.150 ************************************ 00:08:33.150 START TEST nvmf_zcopy 00:08:33.150 ************************************ 00:08:33.150 07:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:33.150 * Looking for test storage... 
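Before the zcopy test proper begins, the harness repeats the same lcov probe that opened the multipath test above: it reads the installed lcov version (1.15 here) and scripts/common.sh compares it field by field against 2 to decide whether the legacy --rc lcov_branch_coverage=1 option spelling is needed. The comparison splits both version strings on ., - and :, then walks the fields numerically, padding the shorter list with zeros. A reduced sketch, limited to the less-than case actually exercised here (function name and structure are illustrative, not the literal cmp_versions body):

  # version_lt A B: succeed if version A sorts strictly before version B.
  version_lt() {
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i a b
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          a=${v1[i]:-0} b=${v2[i]:-0}
          (( a < b )) && return 0     # e.g. 1 < 2 decides 1.15 vs 2 at field 0
          (( a > b )) && return 1
      done
      return 1                        # equal versions are not less-than
  }
  version_lt 1.15 2 && echo "old lcov: use --rc lcov_branch_coverage=1"

With lcov 1.15 the very first field decides (1 < 2), so the old lcov_* option names are exported into LCOV_OPTS, which is exactly what the trace does next.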
00:08:33.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:33.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.150 --rc genhtml_branch_coverage=1 00:08:33.150 --rc genhtml_function_coverage=1 00:08:33.150 --rc genhtml_legend=1 00:08:33.150 --rc geninfo_all_blocks=1 00:08:33.150 --rc geninfo_unexecuted_blocks=1 00:08:33.150 00:08:33.150 ' 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:33.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.150 --rc genhtml_branch_coverage=1 00:08:33.150 --rc genhtml_function_coverage=1 00:08:33.150 --rc genhtml_legend=1 00:08:33.150 --rc geninfo_all_blocks=1 00:08:33.150 --rc geninfo_unexecuted_blocks=1 00:08:33.150 00:08:33.150 ' 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:33.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.150 --rc genhtml_branch_coverage=1 00:08:33.150 --rc genhtml_function_coverage=1 00:08:33.150 --rc genhtml_legend=1 00:08:33.150 --rc geninfo_all_blocks=1 00:08:33.150 --rc geninfo_unexecuted_blocks=1 00:08:33.150 00:08:33.150 ' 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:33.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.150 --rc genhtml_branch_coverage=1 00:08:33.150 --rc genhtml_function_coverage=1 00:08:33.150 --rc genhtml_legend=1 00:08:33.150 --rc geninfo_all_blocks=1 00:08:33.150 --rc geninfo_unexecuted_blocks=1 00:08:33.150 00:08:33.150 ' 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.150 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:33.151 07:22:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:41.289 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:41.289 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:41.289 Found net devices under 0000:31:00.0: cvl_0_0 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:41.289 Found net devices under 0000:31:00.1: cvl_0_1 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:41.289 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:41.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:08:41.290 00:08:41.290 --- 10.0.0.2 ping statistics --- 00:08:41.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.290 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:08:41.290 00:08:41.290 --- 10.0.0.1 ping statistics --- 00:08:41.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.290 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3240950 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3240950 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3240950 ']' 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:41.290 07:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.290 [2024-11-20 07:22:58.919144] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:08:41.290 [2024-11-20 07:22:58.919210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.290 [2024-11-20 07:22:59.017924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.290 [2024-11-20 07:22:59.068119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.290 [2024-11-20 07:22:59.068167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.290 [2024-11-20 07:22:59.068176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.290 [2024-11-20 07:22:59.068183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.290 [2024-11-20 07:22:59.068189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.290 [2024-11-20 07:22:59.069030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.551 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:41.551 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:41.551 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:41.551 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.551 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.812 [2024-11-20 07:22:59.781090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.812 [2024-11-20 07:22:59.805412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.812 malloc0 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:41.812 { 00:08:41.812 "params": { 00:08:41.812 "name": "Nvme$subsystem", 00:08:41.812 "trtype": "$TEST_TRANSPORT", 00:08:41.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.812 "adrfam": "ipv4", 00:08:41.812 "trsvcid": "$NVMF_PORT", 00:08:41.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.812 "hdgst": ${hdgst:-false}, 00:08:41.812 "ddgst": ${ddgst:-false} 00:08:41.812 }, 00:08:41.812 "method": "bdev_nvme_attach_controller" 00:08:41.812 } 00:08:41.812 EOF 00:08:41.812 )") 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
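The trace above amounts to a reproducible recipe: nvmf_tcp_init in nvmf/common.sh isolates the target NIC in its own network namespace (cvl_0_0 at 10.0.0.2 inside cvl_0_0_ns_spdk, with cvl_0_1 at 10.0.0.1 left in the root namespace as the initiator side), and target/zcopy.sh then provisions the zero-copy target over RPC. Since rpc_cmd is autotest's thin wrapper around scripts/rpc.py, the same setup can be replayed by hand roughly as follows; this is a sketch assembled from the arguments visible in the trace, not the scripts' literal code:

  # network split, as traced by nvmf_tcp_init above (interface names from this run)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

  # target provisioning, matching the rpc_cmd calls traced above
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport with zero-copy enabled
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MB RAM-backed bdev, 4 KiB blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The RPC socket (/var/tmp/spdk.sock) is a filesystem Unix socket, which is why rpc_cmd runs in the root namespace even though nvmf_tgt itself was launched under ip netns exec cvl_0_0_ns_spdk.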
00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:41.812 07:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.812 "params": { 00:08:41.812 "name": "Nvme1", 00:08:41.812 "trtype": "tcp", 00:08:41.812 "traddr": "10.0.0.2", 00:08:41.812 "adrfam": "ipv4", 00:08:41.812 "trsvcid": "4420", 00:08:41.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.812 "hdgst": false, 00:08:41.812 "ddgst": false 00:08:41.812 }, 00:08:41.812 "method": "bdev_nvme_attach_controller" 00:08:41.812 }' 00:08:41.812 [2024-11-20 07:22:59.916381] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:08:41.812 [2024-11-20 07:22:59.916443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3241271 ] 00:08:41.812 [2024-11-20 07:23:00.009372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.073 [2024-11-20 07:23:00.067502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.333 Running I/O for 10 seconds... 00:08:44.216 7141.00 IOPS, 55.79 MiB/s [2024-11-20T06:23:03.366Z] 8426.00 IOPS, 65.83 MiB/s [2024-11-20T06:23:04.749Z] 8857.00 IOPS, 69.20 MiB/s [2024-11-20T06:23:05.689Z] 9073.75 IOPS, 70.89 MiB/s [2024-11-20T06:23:06.631Z] 9207.80 IOPS, 71.94 MiB/s [2024-11-20T06:23:07.571Z] 9296.83 IOPS, 72.63 MiB/s [2024-11-20T06:23:08.512Z] 9360.14 IOPS, 73.13 MiB/s [2024-11-20T06:23:09.452Z] 9405.12 IOPS, 73.48 MiB/s [2024-11-20T06:23:10.393Z] 9443.22 IOPS, 73.78 MiB/s [2024-11-20T06:23:10.393Z] 9469.30 IOPS, 73.98 MiB/s 00:08:52.183 Latency(us) 00:08:52.183 [2024-11-20T06:23:10.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.183 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:52.183 Verification LBA range: start 0x0 length 0x1000 00:08:52.183 Nvme1n1 : 10.01 9471.93 74.00 0.00 0.00 13466.46 2020.69 27634.35 00:08:52.183 [2024-11-20T06:23:10.393Z] =================================================================================================================== 00:08:52.183 [2024-11-20T06:23:10.393Z] Total : 9471.93 74.00 0.00 0.00 13466.46 2020.69 27634.35 00:08:52.444 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3243294 00:08:52.444 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:52.444 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.444 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:52.444 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:52.444 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:52.444 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.444 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.444 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.444 { 00:08:52.444 "params": { 00:08:52.444 "name": 
"Nvme$subsystem", 00:08:52.444 "trtype": "$TEST_TRANSPORT", 00:08:52.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.444 "adrfam": "ipv4", 00:08:52.444 "trsvcid": "$NVMF_PORT", 00:08:52.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.444 "hdgst": ${hdgst:-false}, 00:08:52.444 "ddgst": ${ddgst:-false} 00:08:52.444 }, 00:08:52.444 "method": "bdev_nvme_attach_controller" 00:08:52.444 } 00:08:52.444 EOF 00:08:52.444 )") 00:08:52.444 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:52.444 [2024-11-20 07:23:10.468565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.468591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:52.444 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:52.444 07:23:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.444 "params": { 00:08:52.444 "name": "Nvme1", 00:08:52.444 "trtype": "tcp", 00:08:52.444 "traddr": "10.0.0.2", 00:08:52.444 "adrfam": "ipv4", 00:08:52.444 "trsvcid": "4420", 00:08:52.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.444 "hdgst": false, 00:08:52.444 "ddgst": false 00:08:52.444 }, 00:08:52.444 "method": "bdev_nvme_attach_controller" 00:08:52.444 }' 00:08:52.444 [2024-11-20 07:23:10.480568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.480577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.492598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.492605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.504629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.504640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.511479] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:08:52.444 [2024-11-20 07:23:10.511525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3243294 ] 00:08:52.444 [2024-11-20 07:23:10.516660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.516667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.528691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.528698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.540721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.540727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.552754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.552761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.564785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.564792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.576815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.576822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.588847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.588854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.594596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.444 [2024-11-20 07:23:10.600879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.600886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.612910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.612918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.624386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.444 [2024-11-20 07:23:10.624941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.624948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.636977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.636985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.444 [2024-11-20 07:23:10.649009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.444 [2024-11-20 07:23:10.649022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.661036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:52.705 [2024-11-20 07:23:10.661046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.673066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.673074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.685098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.685105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.697139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.697157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.709163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.709173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.721193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.721202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.733223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.733230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.745253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.745260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.757286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.757293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.769320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.769330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.781354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.781365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.793384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.793391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.805423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.805438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 Running I/O for 5 seconds... 
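A quick sanity check on the first bdevperf pass above: the MiB/s column is simply IOPS times the 8 KiB I/O size (9471.93 x 8192 B = 74.0 MiB/s), so the 10 s verify run settled at about 74 MiB/s against the malloc-backed namespace. The second bdevperf instance (tracked as perfpid=3243294) reads its attach parameters from the printf'd JSON via --json /dev/fd/63 and runs a 5 s, queue-depth-128, 50/50 random read/write workload at the same 8 KiB size. While it runs, the script keeps calling nvmf_subsystem_add_ns against the live subsystem; each attempt goes through the subsystem pause path (the failing callback is nvmf_rpc_ns_paused) and is rejected because NSID 1 already belongs to malloc0, which produces the error pair already visible above during bdevperf startup and repeating every ~12 ms for the rest of the run. A plausible shape for the loop driving that stream, inferred from the trace rather than quoted from zcopy.sh:

  # hammer the namespace-add path while bdevperf ($perfpid) is still running;
  # the add is expected to fail -- NSID 1 is taken -- but each attempt still
  # forces a subsystem pause/resume cycle underneath in-flight zero-copy I/O
  while kill -0 "$perfpid" 2> /dev/null; do
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done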
00:08:52.705 [2024-11-20 07:23:10.821044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.821060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.833709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.833724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.847618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.847633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.860254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.860269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.872814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.872829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.885722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.885737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.705 [2024-11-20 07:23:10.898052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.705 [2024-11-20 07:23:10.898067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:10.911069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:10.911084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:10.923680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:10.923695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:10.936170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:10.936189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:10.948580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:10.948596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:10.961561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:10.961576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:10.974711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:10.974726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:10.988308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:10.988323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:11.000657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 
[2024-11-20 07:23:11.000672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:11.014152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:11.014167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:11.027793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:11.027808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:11.041460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:11.041475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:11.055068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:11.055084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:11.068515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:11.068530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:11.081734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:11.081752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:11.095359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:11.095374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:11.108142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:11.108156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:11.121752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:11.121767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:11.134338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:11.134352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:11.147042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:11.147056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.966 [2024-11-20 07:23:11.159539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.966 [2024-11-20 07:23:11.159554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.173362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.173377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.186541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.186556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.199796] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.199811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.213048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.213063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.226413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.226427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.239296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.239310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.251853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.251867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.264843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.264858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.277647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.277661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.291496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.291510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.303806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.303821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.316249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.316264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.329101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.329116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.342750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.342764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.356008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.356023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.369154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.369169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.382240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.382255] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.395283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.395297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.408557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.408571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.227 [2024-11-20 07:23:11.421849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.227 [2024-11-20 07:23:11.421863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.435068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.435083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.448662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.448677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.462329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.462343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.475717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.475732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.488870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.488884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.501617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.501631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.514386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.514400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.527679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.527693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.541295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.541308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.554662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.554677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.568320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.568335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.581674] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.581688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.594999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.595013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.608566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.608580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.622147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.622161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.635309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.635324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.648590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.648604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.661075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.661090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.673620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.488 [2024-11-20 07:23:11.673634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.488 [2024-11-20 07:23:11.686538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.489 [2024-11-20 07:23:11.686552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.748 [2024-11-20 07:23:11.699103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.699118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.711468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.711482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.724903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.724917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.737665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.737679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.750780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.750794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.763456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.763470] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.777103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.777117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.789843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.789857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.802381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.802395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 19047.00 IOPS, 148.80 MiB/s [2024-11-20T06:23:11.959Z] [2024-11-20 07:23:11.815645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.815661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.828767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.828782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.842393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.842407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.855553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.855568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.869012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.869026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.882770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.882784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.895957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.895971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.909454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.909469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.922712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.922730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.935443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.935457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.749 [2024-11-20 07:23:11.948869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.749 [2024-11-20 07:23:11.948883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.009 [2024-11-20 
07:23:11.961589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.009 [2024-11-20 07:23:11.961604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:11.974128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:11.974142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:11.986669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:11.986684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:11.999341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:11.999355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.012726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.012741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.026062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.026076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.039501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.039515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.053117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.053131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.066704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.066719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.079491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.079506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.092458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.092473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.104925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.104939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.118292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.118307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.131758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.131773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.145131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.145145] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.158228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.158242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.171726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.171749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.184818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.184832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.198236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.198251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.010 [2024-11-20 07:23:12.211296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.010 [2024-11-20 07:23:12.211310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.270 [2024-11-20 07:23:12.224427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.270 [2024-11-20 07:23:12.224442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.270 [2024-11-20 07:23:12.237743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.270 [2024-11-20 07:23:12.237762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.270 [2024-11-20 07:23:12.250971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.270 [2024-11-20 07:23:12.250985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.271 [2024-11-20 07:23:12.264352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.271 [2024-11-20 07:23:12.264367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.271 [2024-11-20 07:23:12.276959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.271 [2024-11-20 07:23:12.276974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.271 [2024-11-20 07:23:12.290685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.271 [2024-11-20 07:23:12.290699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.271 [2024-11-20 07:23:12.303874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.271 [2024-11-20 07:23:12.303888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.271 [2024-11-20 07:23:12.317575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.271 [2024-11-20 07:23:12.317589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.271 [2024-11-20 07:23:12.329972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.271 [2024-11-20 07:23:12.329986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.271 [2024-11-20 07:23:12.342519] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:54.271 [2024-11-20 07:23:12.342533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats every 12-14 ms, timestamps 07:23:12.355 through 07:23:15.745, roughly 260 repetitions in all; only the I/O generator's once-per-second throughput samples below break the pattern ...]
00:08:54.793 19164.50 IOPS, 149.72 MiB/s [2024-11-20T06:23:13.003Z]
00:08:55.836 19179.00 IOPS, 149.84 MiB/s [2024-11-20T06:23:14.046Z]
00:08:56.881 19189.25 IOPS, 149.92 MiB/s [2024-11-20T06:23:15.091Z]
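The wall of errors above is the zcopy test repeatedly calling the nvmf_subsystem_add_ns RPC while NSID 1 is still attached; the failures appear deliberate, since I/O keeps flowing and the run is not aborted. A minimal way to reproduce the pair by hand is sketched below; it assumes a running SPDK TCP target that already serves nqn.2016-06.io.spdk:cnode1 with NSID 1 occupied, scripts/rpc.py from an SPDK checkout, and a spare bdev name (spare0) that is purely illustrative.

  # Reproduction sketch, not the test script itself. Assumptions: target up,
  # NSID 1 of cnode1 already in use, scripts/rpc.py reachable, spare0 invented.
  rpc=scripts/rpc.py
  $rpc bdev_malloc_create -b spare0 32 512   # 32 MiB malloc bdev to collide with NSID 1
  for _ in $(seq 1 100); do
      # Each call should fail with "Requested NSID 1 already in use" followed
      # by "Unable to add namespace", matching the pair repeated in the log.
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 spare0 -n 1 || true
  done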
00:08:57.665 [2024-11-20 07:23:15.758222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.665 [2024-11-20 07:23:15.758236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... four more repetitions at 07:23:15.771, .784, .797 and .810 ...]
00:08:57.665 19203.20 IOPS, 150.03 MiB/s [2024-11-20T06:23:15.875Z]
00:08:57.665 [2024-11-20 07:23:15.823771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.665 [2024-11-20 07:23:15.823786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.665
00:08:57.665 Latency(us)
00:08:57.665 [2024-11-20T06:23:15.875Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:08:57.665 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:57.665 Nvme1n1            :       5.01   19205.81     150.05       0.00     0.00    6658.67    3058.35   18896.21
00:08:57.665 [2024-11-20T06:23:15.875Z] ===================================================================================================================
00:08:57.665 [2024-11-20T06:23:15.875Z] Total              :            19205.81     150.05       0.00     0.00    6658.67    3058.35   18896.21
[... the error pair recurs nine more times, 07:23:15.833 through 07:23:15.929, before the add/remove loop is reaped ...]
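The summary row is internally consistent: at the job's 8192-byte I/O size there are 128 I/Os per MiB, so MiB/s should equal IOPS divided by 128. A one-line check, assuming bc is available:

  # 19205.81 IOPS * 8192 bytes per I/O, expressed in MiB/s
  echo 'scale=2; 19205.81 * 8192 / 1048576' | bc   # prints 150.04, matching the printed 150.05 up to rounding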
07:23:15.905391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.926 [2024-11-20 07:23:15.905401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.926 [2024-11-20 07:23:15.917420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.926 [2024-11-20 07:23:15.917430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.926 [2024-11-20 07:23:15.929450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.926 [2024-11-20 07:23:15.929459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3243294) - No such process 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3243294 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.926 delay0 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.926 07:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:57.926 [2024-11-20 07:23:16.069717] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:06.059 Initializing NVMe Controllers 00:09:06.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:06.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:06.059 Initialization complete. Launching workers. 
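The zcopy steps traced above reduce to three RPCs against the running target. A minimal sketch of the same sequence, assuming scripts/rpc.py from this checkout and the default /var/tmp/spdk.sock socket (rpc_cmd in the trace is effectively a wrapper around that script):

    # drop the NSID that the error-injection loop above left in place
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # wrap malloc0 in a delay bdev; the -r/-t/-w/-n latencies are in microseconds
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # re-publish the slowed bdev as NSID 1 so I/O queued against it stays in flight
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

With roughly a second of injected latency per I/O, the abort example keeps a deep backlog of outstanding commands to cancel, which is what the submitted/success counters in the summary below measure.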
00:09:06.059 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 218, failed: 43849 00:09:06.059 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 43918, failed to submit 149 00:09:06.059 success 43856, unsuccessful 62, failed 0 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:06.059 rmmod nvme_tcp 00:09:06.059 rmmod nvme_fabrics 00:09:06.059 rmmod nvme_keyring 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3240950 ']' 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3240950 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3240950 ']' 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3240950 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3240950 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3240950' 00:09:06.059 killing process with pid 3240950 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3240950 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3240950 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:06.059 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:06.060 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:06.060 07:23:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:06.060 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:06.060 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:06.060 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.060 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.060 07:23:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.443 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:07.443 00:09:07.443 real 0m34.533s 00:09:07.443 user 0m45.009s 00:09:07.443 sys 0m12.154s 00:09:07.443 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:07.443 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.443 ************************************ 00:09:07.443 END TEST nvmf_zcopy 00:09:07.443 ************************************ 00:09:07.443 07:23:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:07.443 07:23:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:07.443 07:23:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:07.443 07:23:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:07.443 ************************************ 00:09:07.443 START TEST nvmf_nmic 00:09:07.443 ************************************ 00:09:07.443 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:07.705 * Looking for test storage... 
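For reference, the nvmftestfini teardown traced at the end of the zcopy run amounts to unloading the initiator-side modules, dropping the test's firewall rule, and removing the target namespace. A rough by-hand equivalent, using the interface and namespace names from this run (treating ip netns del as the body of _remove_spdk_ns, which is an assumption):

    modprobe -v -r nvme-tcp                               # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the tagged ACCEPT rule (iptr in the trace)
    ip netns del cvl_0_0_ns_spdk                          # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # clear the initiator-side address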
00:09:07.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:07.705 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:07.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.706 --rc genhtml_branch_coverage=1 00:09:07.706 --rc genhtml_function_coverage=1 00:09:07.706 --rc genhtml_legend=1 00:09:07.706 --rc geninfo_all_blocks=1 00:09:07.706 --rc geninfo_unexecuted_blocks=1 00:09:07.706 00:09:07.706 ' 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:07.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.706 --rc genhtml_branch_coverage=1 00:09:07.706 --rc genhtml_function_coverage=1 00:09:07.706 --rc genhtml_legend=1 00:09:07.706 --rc geninfo_all_blocks=1 00:09:07.706 --rc geninfo_unexecuted_blocks=1 00:09:07.706 00:09:07.706 ' 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:07.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.706 --rc genhtml_branch_coverage=1 00:09:07.706 --rc genhtml_function_coverage=1 00:09:07.706 --rc genhtml_legend=1 00:09:07.706 --rc geninfo_all_blocks=1 00:09:07.706 --rc geninfo_unexecuted_blocks=1 00:09:07.706 00:09:07.706 ' 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:07.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.706 --rc genhtml_branch_coverage=1 00:09:07.706 --rc genhtml_function_coverage=1 00:09:07.706 --rc genhtml_legend=1 00:09:07.706 --rc geninfo_all_blocks=1 00:09:07.706 --rc geninfo_unexecuted_blocks=1 00:09:07.706 00:09:07.706 ' 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
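The scripts/common.sh trace above ("lt 1.15 2" via cmp_versions) decides whether the installed lcov predates 2.x by splitting each version string on dots and comparing numerically field by field, with missing fields treated as zero. A condensed sketch of that logic (the real helper additionally sanitizes each field through its decimal() function, omitted here):

    lt() {  # succeed when version $1 is strictly older than version $2
        local IFS=.-
        local -a ver1=($1) ver2=($2)
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1  # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov 1.x: keep the --rc lcov_* options'

That matches the outcome in the trace: lcov 1.15 sorts below 2, so LCOV_OPTS keeps the lcov_branch_coverage/lcov_function_coverage switches.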
00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:07.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:07.706 
07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:07.706 07:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:15.983 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:15.983 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:15.983 07:23:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:15.983 Found net devices under 0000:31:00.0: cvl_0_0 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:15.983 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:15.984 Found net devices under 0000:31:00.1: cvl_0_1 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:15.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:09:15.984 00:09:15.984 --- 10.0.0.2 ping statistics --- 00:09:15.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.984 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:15.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:09:15.984 00:09:15.984 --- 10.0.0.1 ping statistics --- 00:09:15.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.984 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3250070 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3250070 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3250070 ']' 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:15.984 07:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.984 [2024-11-20 07:23:33.464570] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
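The two pings above close out the nvmf_tcp_init plumbing: one e810 port (cvl_0_0) is moved into a private network namespace to act as the target, its peer port (cvl_0_1) stays in the root namespace as the initiator, and a single firewall rule admits the NVMe/TCP port. Condensed from the traced commands (the trace additionally tags the iptables rule with -m comment for later cleanup):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> initiator

This is why nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk" in the nvmfappstart trace just above: its listener on 10.0.0.2:4420 then faces the initiator-side port across the link between the two NIC ports.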
00:09:15.984 [2024-11-20 07:23:33.464639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.984 [2024-11-20 07:23:33.565429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.984 [2024-11-20 07:23:33.620012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.984 [2024-11-20 07:23:33.620065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.984 [2024-11-20 07:23:33.620074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.984 [2024-11-20 07:23:33.620081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.984 [2024-11-20 07:23:33.620088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.984 [2024-11-20 07:23:33.622175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.984 [2024-11-20 07:23:33.622337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.984 [2024-11-20 07:23:33.622499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.984 [2024-11-20 07:23:33.622499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.246 [2024-11-20 07:23:34.340317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.246 Malloc0 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.246 [2024-11-20 07:23:34.415402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:16.246 test case1: single bdev can't be used in multiple subsystems 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.246 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.246 [2024-11-20 07:23:34.451235] bdev.c:8318:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:16.246 [2024-11-20 07:23:34.451265] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:16.246 [2024-11-20 07:23:34.451273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.507 request: 00:09:16.507 { 00:09:16.507 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:16.507 "namespace": { 00:09:16.507 "bdev_name": "Malloc0", 00:09:16.507 "no_auto_visible": false 
00:09:16.507 }, 00:09:16.507 "method": "nvmf_subsystem_add_ns", 00:09:16.507 "req_id": 1 00:09:16.507 } 00:09:16.507 Got JSON-RPC error response 00:09:16.507 response: 00:09:16.507 { 00:09:16.507 "code": -32602, 00:09:16.507 "message": "Invalid parameters" 00:09:16.507 } 00:09:16.507 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:16.507 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:16.507 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:16.507 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:16.507 Adding namespace failed - expected result. 00:09:16.507 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:16.507 test case2: host connect to nvmf target in multiple paths 00:09:16.507 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:16.507 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.507 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.507 [2024-11-20 07:23:34.463459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:16.507 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.507 07:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.892 07:23:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:19.803 07:23:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.803 07:23:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:19.803 07:23:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.803 07:23:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:19.803 07:23:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:09:21.740 07:23:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:21.740 07:23:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:21.740 07:23:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.740 07:23:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:21.740 07:23:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.740 07:23:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:21.740 07:23:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:21.740 [global] 00:09:21.740 thread=1 00:09:21.740 invalidate=1 00:09:21.740 rw=write 00:09:21.740 time_based=1 00:09:21.740 runtime=1 00:09:21.740 ioengine=libaio 00:09:21.740 direct=1 00:09:21.740 bs=4096 00:09:21.740 iodepth=1 00:09:21.740 norandommap=0 00:09:21.740 numjobs=1 00:09:21.740 00:09:21.740 verify_dump=1 00:09:21.740 verify_backlog=512 00:09:21.740 verify_state_save=0 00:09:21.740 do_verify=1 00:09:21.740 verify=crc32c-intel 00:09:21.740 [job0] 00:09:21.740 filename=/dev/nvme0n1 00:09:21.740 Could not set queue depth (nvme0n1) 00:09:22.007 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.007 fio-3.35 00:09:22.007 Starting 1 thread 00:09:23.391 00:09:23.391 job0: (groupid=0, jobs=1): err= 0: pid=3251561: Wed Nov 20 07:23:41 2024 00:09:23.391 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:23.391 slat (nsec): min=10304, max=64168, avg=27283.12, stdev=3632.60 00:09:23.391 clat (usec): min=721, max=1313, avg=1095.35, stdev=74.85 00:09:23.391 lat (usec): min=748, max=1340, avg=1122.63, stdev=74.84 00:09:23.391 clat percentiles (usec): 00:09:23.391 | 1.00th=[ 898], 5.00th=[ 971], 10.00th=[ 996], 20.00th=[ 1037], 00:09:23.391 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:09:23.391 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:09:23.391 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1319], 99.95th=[ 1319], 00:09:23.391 | 99.99th=[ 1319] 00:09:23.391 write: IOPS=685, BW=2741KiB/s (2807kB/s)(2744KiB/1001msec); 0 zone resets 00:09:23.391 slat (nsec): min=4913, max=67122, avg=29125.92, stdev=10525.85 00:09:23.391 clat (usec): min=250, max=966, avg=576.74, stdev=97.27 00:09:23.391 lat (usec): min=260, max=980, avg=605.86, stdev=102.28 00:09:23.391 clat percentiles (usec): 00:09:23.391 | 1.00th=[ 334], 5.00th=[ 400], 10.00th=[ 445], 20.00th=[ 502], 00:09:23.391 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 603], 00:09:23.391 | 70.00th=[ 635], 80.00th=[ 660], 90.00th=[ 701], 95.00th=[ 725], 00:09:23.391 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 963], 99.95th=[ 963], 00:09:23.391 | 99.99th=[ 963] 00:09:23.391 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:23.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:23.391 lat (usec) : 500=11.10%, 750=45.16%, 1000=5.76% 00:09:23.391 lat (msec) : 2=37.98% 00:09:23.391 cpu : usr=2.30%, sys=4.70%, ctx=1203, majf=0, minf=1 00:09:23.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.391 issued rwts: total=512,686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.391 00:09:23.391 Run status group 0 (all jobs): 00:09:23.391 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:09:23.391 WRITE: bw=2741KiB/s (2807kB/s), 2741KiB/s-2741KiB/s (2807kB/s-2807kB/s), io=2744KiB (2810kB), run=1001-1001msec 00:09:23.391 00:09:23.391 Disk stats (read/write): 00:09:23.391 nvme0n1: ios=562/518, merge=0/0, ticks=820/250, in_queue=1070, util=99.90% 00:09:23.391 07:23:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.391 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.391 rmmod nvme_tcp 00:09:23.391 rmmod nvme_fabrics 00:09:23.392 rmmod nvme_keyring 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3250070 ']' 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3250070 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3250070 ']' 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3250070 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3250070 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3250070' 00:09:23.392 killing process with pid 3250070 00:09:23.392 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3250070 00:09:23.392 07:23:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3250070 00:09:23.652 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:23.652 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:23.652 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:23.652 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:23.652 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:23.652 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:23.652 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:23.652 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.652 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.652 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.652 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.652 07:23:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.567 07:23:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.567 00:09:25.567 real 0m18.140s 00:09:25.567 user 0m50.926s 00:09:25.567 sys 0m6.629s 00:09:25.567 07:23:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:25.567 07:23:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.567 ************************************ 00:09:25.567 END TEST nvmf_nmic 00:09:25.567 ************************************ 00:09:25.828 07:23:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:25.828 07:23:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:25.829 07:23:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:25.829 07:23:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.829 ************************************ 00:09:25.829 START TEST nvmf_fio_target 00:09:25.829 ************************************ 00:09:25.829 07:23:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:25.829 * Looking for test storage... 
00:09:25.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.829 07:23:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:25.829 07:23:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:25.829 07:23:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:25.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.829 --rc genhtml_branch_coverage=1 00:09:25.829 --rc genhtml_function_coverage=1 00:09:25.829 --rc genhtml_legend=1 00:09:25.829 --rc geninfo_all_blocks=1 00:09:25.829 --rc geninfo_unexecuted_blocks=1 00:09:25.829 00:09:25.829 ' 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:25.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.829 --rc genhtml_branch_coverage=1 00:09:25.829 --rc genhtml_function_coverage=1 00:09:25.829 --rc genhtml_legend=1 00:09:25.829 --rc geninfo_all_blocks=1 00:09:25.829 --rc geninfo_unexecuted_blocks=1 00:09:25.829 00:09:25.829 ' 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:25.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.829 --rc genhtml_branch_coverage=1 00:09:25.829 --rc genhtml_function_coverage=1 00:09:25.829 --rc genhtml_legend=1 00:09:25.829 --rc geninfo_all_blocks=1 00:09:25.829 --rc geninfo_unexecuted_blocks=1 00:09:25.829 00:09:25.829 ' 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:25.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.829 --rc genhtml_branch_coverage=1 00:09:25.829 --rc genhtml_function_coverage=1 00:09:25.829 --rc genhtml_legend=1 00:09:25.829 --rc geninfo_all_blocks=1 00:09:25.829 --rc geninfo_unexecuted_blocks=1 00:09:25.829 00:09:25.829 ' 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.829 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.090 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.090 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:26.090 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.091 07:23:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.091 07:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.236 07:23:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:34.236 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:34.236 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.236 07:23:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:34.236 Found net devices under 0000:31:00.0: cvl_0_0 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:34.236 Found net devices under 0000:31:00.1: cvl_0_1 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.236 07:23:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.236 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:09:34.237 00:09:34.237 --- 10.0.0.2 ping statistics --- 00:09:34.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.237 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:09:34.237 00:09:34.237 --- 10.0.0.1 ping statistics --- 00:09:34.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.237 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3256278 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3256278 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3256278 ']' 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:34.237 07:23:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.237 [2024-11-20 07:23:51.777547] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
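Condensed, the nvmf_tcp_init plumbing traced just above moves the target-side port of the e810 pair into a private network namespace, addresses both ends, opens TCP/4420, and verifies reachability in both directions. A sketch of the traced commands (the address flushes and the iptables comment argument are omitted here):

  # cvl_0_0 (target) goes into a namespace; cvl_0_1 (initiator) stays in
  # the root namespace, mirroring the trace above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator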
00:09:34.237 [2024-11-20 07:23:51.777615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.237 [2024-11-20 07:23:51.881296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.237 [2024-11-20 07:23:51.933992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.237 [2024-11-20 07:23:51.934045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.237 [2024-11-20 07:23:51.934054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.237 [2024-11-20 07:23:51.934061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.237 [2024-11-20 07:23:51.934068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.237 [2024-11-20 07:23:51.936460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.237 [2024-11-20 07:23:51.936622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.237 [2024-11-20 07:23:51.936824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.237 [2024-11-20 07:23:51.936861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.498 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:34.498 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:34.498 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.498 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.498 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.498 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.498 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:34.758 [2024-11-20 07:23:52.811098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.758 07:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.019 07:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:35.019 07:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.280 07:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:35.280 07:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.542 07:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:35.542 07:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.542 07:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:35.542 07:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:35.802 07:23:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.062 07:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:36.063 07:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.324 07:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:36.324 07:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.585 07:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:36.585 07:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:36.585 07:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:36.846 07:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:36.846 07:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.106 07:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:37.106 07:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:37.367 07:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.367 [2024-11-20 07:23:55.465098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.367 07:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:37.627 07:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:37.887 07:23:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:39.270 07:23:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:39.270 07:23:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:39.270 07:23:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.270 07:23:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:39.270 07:23:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:39.270 07:23:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:41.817 07:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:41.817 07:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:41.817 07:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:41.817 07:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:41.817 07:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:41.817 07:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:41.817 07:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:41.817 [global] 00:09:41.817 thread=1 00:09:41.817 invalidate=1 00:09:41.817 rw=write 00:09:41.817 time_based=1 00:09:41.817 runtime=1 00:09:41.817 ioengine=libaio 00:09:41.817 direct=1 00:09:41.817 bs=4096 00:09:41.817 iodepth=1 00:09:41.817 norandommap=0 00:09:41.817 numjobs=1 00:09:41.817 00:09:41.817 verify_dump=1 00:09:41.817 verify_backlog=512 00:09:41.817 verify_state_save=0 00:09:41.817 do_verify=1 00:09:41.817 verify=crc32c-intel 00:09:41.817 [job0] 00:09:41.817 filename=/dev/nvme0n1 00:09:41.817 [job1] 00:09:41.817 filename=/dev/nvme0n2 00:09:41.817 [job2] 00:09:41.817 filename=/dev/nvme0n3 00:09:41.817 [job3] 00:09:41.817 filename=/dev/nvme0n4 00:09:41.817 Could not set queue depth (nvme0n1) 00:09:41.817 Could not set queue depth (nvme0n2) 00:09:41.817 Could not set queue depth (nvme0n3) 00:09:41.817 Could not set queue depth (nvme0n4) 00:09:41.817 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.817 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.817 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.817 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.817 fio-3.35 00:09:41.817 Starting 4 threads 00:09:43.206 00:09:43.206 job0: (groupid=0, jobs=1): err= 0: pid=3258032: Wed Nov 20 07:24:01 2024 00:09:43.206 read: IOPS=610, BW=2442KiB/s (2500kB/s)(2444KiB/1001msec) 00:09:43.206 slat (nsec): min=6861, max=47640, avg=27361.75, stdev=4594.69 00:09:43.206 clat (usec): min=537, max=1274, avg=900.03, stdev=129.64 00:09:43.206 lat (usec): min=564, max=1299, avg=927.39, stdev=129.79 00:09:43.206 clat percentiles (usec): 00:09:43.206 | 1.00th=[ 619], 5.00th=[ 693], 10.00th=[ 742], 20.00th=[ 799], 
00:09:43.206 | 30.00th=[ 832], 40.00th=[ 865], 50.00th=[ 889], 60.00th=[ 922], 00:09:43.206 | 70.00th=[ 963], 80.00th=[ 1004], 90.00th=[ 1074], 95.00th=[ 1139], 00:09:43.206 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1270], 99.95th=[ 1270], 00:09:43.206 | 99.99th=[ 1270] 00:09:43.206 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:43.206 slat (usec): min=5, max=1303, avg=24.44, stdev=47.81 00:09:43.206 clat (usec): min=86, max=892, avg=388.49, stdev=216.44 00:09:43.206 lat (usec): min=92, max=1745, avg=412.93, stdev=232.41 00:09:43.206 clat percentiles (usec): 00:09:43.206 | 1.00th=[ 90], 5.00th=[ 103], 10.00th=[ 110], 20.00th=[ 125], 00:09:43.206 | 30.00th=[ 145], 40.00th=[ 347], 50.00th=[ 416], 60.00th=[ 478], 00:09:43.206 | 70.00th=[ 537], 80.00th=[ 594], 90.00th=[ 668], 95.00th=[ 717], 00:09:43.206 | 99.00th=[ 832], 99.50th=[ 865], 99.90th=[ 873], 99.95th=[ 889], 00:09:43.206 | 99.99th=[ 889] 00:09:43.206 bw ( KiB/s): min= 4096, max= 4096, per=31.56%, avg=4096.00, stdev= 0.00, samples=1 00:09:43.206 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:43.206 lat (usec) : 100=2.02%, 250=18.72%, 500=18.84%, 750=25.38%, 1000=27.58% 00:09:43.206 lat (msec) : 2=7.46% 00:09:43.206 cpu : usr=2.50%, sys=5.40%, ctx=1640, majf=0, minf=1 00:09:43.206 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.206 issued rwts: total=611,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.206 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.206 job1: (groupid=0, jobs=1): err= 0: pid=3258057: Wed Nov 20 07:24:01 2024 00:09:43.206 read: IOPS=618, BW=2474KiB/s (2533kB/s)(2476KiB/1001msec) 00:09:43.206 slat (nsec): min=7111, max=62789, avg=25909.10, stdev=5840.42 00:09:43.206 clat (usec): min=323, max=1130, avg=796.08, stdev=136.93 00:09:43.206 lat (usec): min=331, max=1156, avg=821.99, stdev=137.94 00:09:43.206 clat percentiles (usec): 00:09:43.206 | 1.00th=[ 392], 5.00th=[ 529], 10.00th=[ 594], 20.00th=[ 701], 00:09:43.206 | 30.00th=[ 742], 40.00th=[ 783], 50.00th=[ 816], 60.00th=[ 848], 00:09:43.206 | 70.00th=[ 881], 80.00th=[ 906], 90.00th=[ 947], 95.00th=[ 971], 00:09:43.206 | 99.00th=[ 1057], 99.50th=[ 1074], 99.90th=[ 1123], 99.95th=[ 1123], 00:09:43.206 | 99.99th=[ 1123] 00:09:43.206 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:43.206 slat (nsec): min=9818, max=68611, avg=31361.74, stdev=9730.76 00:09:43.206 clat (usec): min=97, max=3478, avg=436.35, stdev=181.22 00:09:43.206 lat (usec): min=107, max=3490, avg=467.71, stdev=185.64 00:09:43.206 clat percentiles (usec): 00:09:43.206 | 1.00th=[ 108], 5.00th=[ 128], 10.00th=[ 229], 20.00th=[ 293], 00:09:43.206 | 30.00th=[ 367], 40.00th=[ 408], 50.00th=[ 445], 60.00th=[ 486], 00:09:43.206 | 70.00th=[ 529], 80.00th=[ 570], 90.00th=[ 627], 95.00th=[ 668], 00:09:43.206 | 99.00th=[ 758], 99.50th=[ 783], 99.90th=[ 889], 99.95th=[ 3490], 00:09:43.206 | 99.99th=[ 3490] 00:09:43.206 bw ( KiB/s): min= 4096, max= 4096, per=31.56%, avg=4096.00, stdev= 0.00, samples=1 00:09:43.206 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:43.206 lat (usec) : 100=0.12%, 250=7.30%, 500=34.02%, 750=32.26%, 1000=25.14% 00:09:43.206 lat (msec) : 2=1.10%, 4=0.06% 00:09:43.206 cpu : usr=2.50%, sys=4.80%, ctx=1644, majf=0, minf=2 00:09:43.206 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.206 issued rwts: total=619,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.206 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.206 job2: (groupid=0, jobs=1): err= 0: pid=3258062: Wed Nov 20 07:24:01 2024 00:09:43.206 read: IOPS=18, BW=73.2KiB/s (75.0kB/s)(76.0KiB/1038msec) 00:09:43.206 slat (nsec): min=9294, max=31609, avg=26215.95, stdev=4241.66 00:09:43.206 clat (usec): min=435, max=42189, avg=37573.46, stdev=12988.97 00:09:43.206 lat (usec): min=467, max=42198, avg=37599.68, stdev=12987.89 00:09:43.206 clat percentiles (usec): 00:09:43.206 | 1.00th=[ 437], 5.00th=[ 437], 10.00th=[ 1004], 20.00th=[41681], 00:09:43.206 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:43.206 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:43.206 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:43.206 | 99.99th=[42206] 00:09:43.206 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:09:43.206 slat (nsec): min=10485, max=56657, avg=32135.02, stdev=8823.95 00:09:43.206 clat (usec): min=215, max=966, avg=592.97, stdev=149.28 00:09:43.206 lat (usec): min=235, max=1002, avg=625.11, stdev=151.56 00:09:43.206 clat percentiles (usec): 00:09:43.206 | 1.00th=[ 247], 5.00th=[ 334], 10.00th=[ 392], 20.00th=[ 457], 00:09:43.206 | 30.00th=[ 510], 40.00th=[ 553], 50.00th=[ 611], 60.00th=[ 660], 00:09:43.206 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 807], 00:09:43.206 | 99.00th=[ 914], 99.50th=[ 947], 99.90th=[ 971], 99.95th=[ 971], 00:09:43.206 | 99.99th=[ 971] 00:09:43.206 bw ( KiB/s): min= 4096, max= 4096, per=31.56%, avg=4096.00, stdev= 0.00, samples=1 00:09:43.206 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:43.206 lat (usec) : 250=1.32%, 500=25.42%, 750=57.25%, 1000=12.62% 00:09:43.206 lat (msec) : 2=0.19%, 50=3.20% 00:09:43.206 cpu : usr=0.87%, sys=1.45%, ctx=532, majf=0, minf=2 00:09:43.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.207 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.207 job3: (groupid=0, jobs=1): err= 0: pid=3258070: Wed Nov 20 07:24:01 2024 00:09:43.207 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:43.207 slat (nsec): min=8363, max=63323, avg=27107.13, stdev=2424.98 00:09:43.207 clat (usec): min=727, max=1200, avg=980.21, stdev=85.32 00:09:43.207 lat (usec): min=754, max=1227, avg=1007.32, stdev=85.32 00:09:43.207 clat percentiles (usec): 00:09:43.207 | 1.00th=[ 758], 5.00th=[ 799], 10.00th=[ 865], 20.00th=[ 914], 00:09:43.207 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:09:43.207 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:09:43.207 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1205], 99.95th=[ 1205], 00:09:43.207 | 99.99th=[ 1205] 00:09:43.207 write: IOPS=807, BW=3229KiB/s (3306kB/s)(3232KiB/1001msec); 0 zone resets 00:09:43.207 slat (usec): min=10, max=200, avg=30.45, stdev=13.21 00:09:43.207 clat (usec): min=117, max=1219, 
avg=557.21, stdev=156.81 00:09:43.207 lat (usec): min=130, max=1254, avg=587.65, stdev=163.14 00:09:43.207 clat percentiles (usec): 00:09:43.207 | 1.00th=[ 141], 5.00th=[ 231], 10.00th=[ 347], 20.00th=[ 449], 00:09:43.207 | 30.00th=[ 494], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 611], 00:09:43.207 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 775], 00:09:43.207 | 99.00th=[ 848], 99.50th=[ 881], 99.90th=[ 1221], 99.95th=[ 1221], 00:09:43.207 | 99.99th=[ 1221] 00:09:43.207 bw ( KiB/s): min= 4096, max= 4096, per=31.56%, avg=4096.00, stdev= 0.00, samples=1 00:09:43.207 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:43.207 lat (usec) : 250=3.33%, 500=15.83%, 750=37.50%, 1000=25.30% 00:09:43.207 lat (msec) : 2=18.03% 00:09:43.207 cpu : usr=2.00%, sys=3.80%, ctx=1322, majf=0, minf=1 00:09:43.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.207 issued rwts: total=512,808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.207 00:09:43.207 Run status group 0 (all jobs): 00:09:43.207 READ: bw=6786KiB/s (6949kB/s), 73.2KiB/s-2474KiB/s (75.0kB/s-2533kB/s), io=7044KiB (7213kB), run=1001-1038msec 00:09:43.207 WRITE: bw=12.7MiB/s (13.3MB/s), 1973KiB/s-4092KiB/s (2020kB/s-4190kB/s), io=13.2MiB (13.8MB), run=1001-1038msec 00:09:43.207 00:09:43.207 Disk stats (read/write): 00:09:43.207 nvme0n1: ios=558/904, merge=0/0, ticks=488/294, in_queue=782, util=86.97% 00:09:43.207 nvme0n2: ios=535/874, merge=0/0, ticks=1254/368, in_queue=1622, util=88.48% 00:09:43.207 nvme0n3: ios=71/512, merge=0/0, ticks=712/299, in_queue=1011, util=92.73% 00:09:43.207 nvme0n4: ios=566/549, merge=0/0, ticks=623/285, in_queue=908, util=96.70% 00:09:43.207 07:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:43.207 [global] 00:09:43.207 thread=1 00:09:43.207 invalidate=1 00:09:43.207 rw=randwrite 00:09:43.207 time_based=1 00:09:43.207 runtime=1 00:09:43.207 ioengine=libaio 00:09:43.207 direct=1 00:09:43.207 bs=4096 00:09:43.207 iodepth=1 00:09:43.207 norandommap=0 00:09:43.207 numjobs=1 00:09:43.207 00:09:43.207 verify_dump=1 00:09:43.207 verify_backlog=512 00:09:43.207 verify_state_save=0 00:09:43.207 do_verify=1 00:09:43.207 verify=crc32c-intel 00:09:43.207 [job0] 00:09:43.207 filename=/dev/nvme0n1 00:09:43.207 [job1] 00:09:43.207 filename=/dev/nvme0n2 00:09:43.207 [job2] 00:09:43.207 filename=/dev/nvme0n3 00:09:43.207 [job3] 00:09:43.207 filename=/dev/nvme0n4 00:09:43.207 Could not set queue depth (nvme0n1) 00:09:43.207 Could not set queue depth (nvme0n2) 00:09:43.207 Could not set queue depth (nvme0n3) 00:09:43.207 Could not set queue depth (nvme0n4) 00:09:43.467 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.467 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.467 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.467 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.467 fio-3.35 00:09:43.467 Starting 4 threads 
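The four /dev/nvme0n1..n4 namespaces these jobs write to come from the fio.sh bring-up traced earlier in this section. Condensed to the RPC calls visible in the log (a sketch: rpc.py stands for the full scripts/rpc.py path, and the Malloc names follow the order in which the trace assigned them):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512     # Malloc0: namespace 1
  rpc.py bdev_malloc_create 64 512     # Malloc1: namespace 2
  rpc.py bdev_malloc_create 64 512     # Malloc2: raid0 member
  rpc.py bdev_malloc_create 64 512     # Malloc3: raid0 member
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_malloc_create 64 512     # Malloc4: concat0 member
  rpc.py bdev_malloc_create 64 512     # Malloc5: concat0 member
  rpc.py bdev_malloc_create 64 512     # Malloc6: concat0 member
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0     # namespace 3
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0   # namespace 4
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
      --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420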
00:09:44.852 00:09:44.852 job0: (groupid=0, jobs=1): err= 0: pid=3258646: Wed Nov 20 07:24:02 2024 00:09:44.852 read: IOPS=18, BW=75.1KiB/s (76.9kB/s)(76.0KiB/1012msec) 00:09:44.852 slat (nsec): min=26005, max=26666, avg=26257.21, stdev=161.48 00:09:44.852 clat (usec): min=40864, max=41228, avg=40979.52, stdev=79.68 00:09:44.852 lat (usec): min=40890, max=41254, avg=41005.78, stdev=79.65 00:09:44.852 clat percentiles (usec): 00:09:44.852 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:44.852 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:44.852 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:44.852 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:44.852 | 99.99th=[41157] 00:09:44.852 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:09:44.852 slat (nsec): min=8501, max=56412, avg=26458.24, stdev=11053.49 00:09:44.852 clat (usec): min=120, max=731, avg=421.97, stdev=128.15 00:09:44.852 lat (usec): min=130, max=763, avg=448.43, stdev=131.65 00:09:44.852 clat percentiles (usec): 00:09:44.852 | 1.00th=[ 192], 5.00th=[ 223], 10.00th=[ 265], 20.00th=[ 322], 00:09:44.852 | 30.00th=[ 338], 40.00th=[ 367], 50.00th=[ 400], 60.00th=[ 449], 00:09:44.852 | 70.00th=[ 498], 80.00th=[ 545], 90.00th=[ 603], 95.00th=[ 652], 00:09:44.852 | 99.00th=[ 709], 99.50th=[ 717], 99.90th=[ 734], 99.95th=[ 734], 00:09:44.852 | 99.99th=[ 734] 00:09:44.852 bw ( KiB/s): min= 4096, max= 4096, per=40.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.852 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.852 lat (usec) : 250=8.29%, 500=60.08%, 750=28.06% 00:09:44.852 lat (msec) : 50=3.58% 00:09:44.852 cpu : usr=1.09%, sys=1.58%, ctx=531, majf=0, minf=1 00:09:44.852 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.853 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.853 job1: (groupid=0, jobs=1): err= 0: pid=3258663: Wed Nov 20 07:24:02 2024 00:09:44.853 read: IOPS=703, BW=2813KiB/s (2881kB/s)(2816KiB/1001msec) 00:09:44.853 slat (nsec): min=7021, max=62603, avg=24150.11, stdev=7552.69 00:09:44.853 clat (usec): min=423, max=953, avg=733.51, stdev=95.34 00:09:44.853 lat (usec): min=449, max=979, avg=757.66, stdev=97.42 00:09:44.853 clat percentiles (usec): 00:09:44.853 | 1.00th=[ 486], 5.00th=[ 578], 10.00th=[ 603], 20.00th=[ 644], 00:09:44.853 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 775], 00:09:44.853 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 848], 95.00th=[ 865], 00:09:44.853 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 955], 00:09:44.853 | 99.99th=[ 955] 00:09:44.853 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:44.853 slat (nsec): min=9372, max=64471, avg=29622.63, stdev=9902.08 00:09:44.853 clat (usec): min=124, max=840, avg=413.15, stdev=126.82 00:09:44.853 lat (usec): min=135, max=874, avg=442.77, stdev=131.16 00:09:44.853 clat percentiles (usec): 00:09:44.853 | 1.00th=[ 137], 5.00th=[ 165], 10.00th=[ 235], 20.00th=[ 322], 00:09:44.853 | 30.00th=[ 351], 40.00th=[ 383], 50.00th=[ 416], 60.00th=[ 441], 00:09:44.853 | 70.00th=[ 482], 80.00th=[ 515], 90.00th=[ 562], 95.00th=[ 627], 00:09:44.853 | 
99.00th=[ 725], 99.50th=[ 742], 99.90th=[ 816], 99.95th=[ 840], 00:09:44.853 | 99.99th=[ 840] 00:09:44.853 bw ( KiB/s): min= 4096, max= 4096, per=40.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.853 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.853 lat (usec) : 250=6.71%, 500=39.00%, 750=34.84%, 1000=19.44% 00:09:44.853 cpu : usr=2.70%, sys=4.70%, ctx=1729, majf=0, minf=1 00:09:44.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.853 issued rwts: total=704,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.853 job2: (groupid=0, jobs=1): err= 0: pid=3258676: Wed Nov 20 07:24:02 2024 00:09:44.853 read: IOPS=18, BW=75.5KiB/s (77.3kB/s)(76.0KiB/1007msec) 00:09:44.853 slat (nsec): min=24748, max=31454, avg=27316.11, stdev=2182.62 00:09:44.853 clat (usec): min=40825, max=41109, avg=40970.16, stdev=80.73 00:09:44.853 lat (usec): min=40852, max=41139, avg=40997.48, stdev=80.46 00:09:44.853 clat percentiles (usec): 00:09:44.853 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:44.853 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:44.853 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:44.853 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:44.853 | 99.99th=[41157] 00:09:44.853 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:09:44.853 slat (nsec): min=9713, max=58863, avg=30033.94, stdev=11146.64 00:09:44.853 clat (usec): min=138, max=2365, avg=406.94, stdev=149.80 00:09:44.853 lat (usec): min=149, max=2376, avg=436.97, stdev=151.89 00:09:44.853 clat percentiles (usec): 00:09:44.853 | 1.00th=[ 184], 5.00th=[ 227], 10.00th=[ 265], 20.00th=[ 302], 00:09:44.853 | 30.00th=[ 330], 40.00th=[ 355], 50.00th=[ 388], 60.00th=[ 433], 00:09:44.853 | 70.00th=[ 465], 80.00th=[ 502], 90.00th=[ 578], 95.00th=[ 619], 00:09:44.853 | 99.00th=[ 701], 99.50th=[ 709], 99.90th=[ 2376], 99.95th=[ 2376], 00:09:44.853 | 99.99th=[ 2376] 00:09:44.853 bw ( KiB/s): min= 4096, max= 4096, per=40.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.853 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.853 lat (usec) : 250=8.47%, 500=68.17%, 750=19.40% 00:09:44.853 lat (msec) : 2=0.19%, 4=0.19%, 50=3.58% 00:09:44.853 cpu : usr=1.09%, sys=1.29%, ctx=532, majf=0, minf=1 00:09:44.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.853 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.853 job3: (groupid=0, jobs=1): err= 0: pid=3258677: Wed Nov 20 07:24:02 2024 00:09:44.853 read: IOPS=16, BW=67.4KiB/s (69.0kB/s)(68.0KiB/1009msec) 00:09:44.853 slat (nsec): min=10706, max=31254, avg=25619.53, stdev=5281.38 00:09:44.853 clat (usec): min=29318, max=42022, avg=41165.71, stdev=3058.57 00:09:44.853 lat (usec): min=29332, max=42052, avg=41191.33, stdev=3061.95 00:09:44.853 clat percentiles (usec): 00:09:44.853 | 1.00th=[29230], 5.00th=[29230], 10.00th=[41157], 20.00th=[41681], 00:09:44.853 | 
30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:44.853 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:44.853 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:44.853 | 99.99th=[42206] 00:09:44.853 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:09:44.853 slat (nsec): min=9843, max=70943, avg=28303.88, stdev=10827.47 00:09:44.853 clat (usec): min=114, max=892, avg=566.02, stdev=184.22 00:09:44.853 lat (usec): min=125, max=926, avg=594.33, stdev=191.36 00:09:44.853 clat percentiles (usec): 00:09:44.853 | 1.00th=[ 119], 5.00th=[ 129], 10.00th=[ 157], 20.00th=[ 478], 00:09:44.853 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644], 00:09:44.853 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 775], 00:09:44.853 | 99.00th=[ 840], 99.50th=[ 889], 99.90th=[ 889], 99.95th=[ 889], 00:09:44.853 | 99.99th=[ 889] 00:09:44.853 bw ( KiB/s): min= 4096, max= 4096, per=40.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.853 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.853 lat (usec) : 250=10.78%, 500=11.91%, 750=66.35%, 1000=7.75% 00:09:44.853 lat (msec) : 50=3.21% 00:09:44.853 cpu : usr=0.50%, sys=1.59%, ctx=532, majf=0, minf=1 00:09:44.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.853 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.853 00:09:44.853 Run status group 0 (all jobs): 00:09:44.853 READ: bw=3000KiB/s (3072kB/s), 67.4KiB/s-2813KiB/s (69.0kB/s-2881kB/s), io=3036KiB (3109kB), run=1001-1012msec 00:09:44.853 WRITE: bw=9.88MiB/s (10.4MB/s), 2024KiB/s-4092KiB/s (2072kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1012msec 00:09:44.853 00:09:44.853 Disk stats (read/write): 00:09:44.853 nvme0n1: ios=64/512, merge=0/0, ticks=643/164, in_queue=807, util=88.28% 00:09:44.853 nvme0n2: ios=535/996, merge=0/0, ticks=1283/389, in_queue=1672, util=92.87% 00:09:44.853 nvme0n3: ios=56/512, merge=0/0, ticks=1496/203, in_queue=1699, util=99.79% 00:09:44.853 nvme0n4: ios=61/512, merge=0/0, ticks=849/289, in_queue=1138, util=100.00% 00:09:44.853 07:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:44.853 [global] 00:09:44.853 thread=1 00:09:44.853 invalidate=1 00:09:44.853 rw=write 00:09:44.853 time_based=1 00:09:44.853 runtime=1 00:09:44.853 ioengine=libaio 00:09:44.853 direct=1 00:09:44.853 bs=4096 00:09:44.853 iodepth=128 00:09:44.853 norandommap=0 00:09:44.853 numjobs=1 00:09:44.853 00:09:44.853 verify_dump=1 00:09:44.853 verify_backlog=512 00:09:44.853 verify_state_save=0 00:09:44.853 do_verify=1 00:09:44.853 verify=crc32c-intel 00:09:44.853 [job0] 00:09:44.853 filename=/dev/nvme0n1 00:09:44.853 [job1] 00:09:44.853 filename=/dev/nvme0n2 00:09:44.853 [job2] 00:09:44.853 filename=/dev/nvme0n3 00:09:44.853 [job3] 00:09:44.853 filename=/dev/nvme0n4 00:09:44.853 Could not set queue depth (nvme0n1) 00:09:44.853 Could not set queue depth (nvme0n2) 00:09:44.853 Could not set queue depth (nvme0n3) 00:09:44.853 Could not set queue depth (nvme0n4) 00:09:45.114 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.114 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.114 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.114 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.114 fio-3.35 00:09:45.114 Starting 4 threads 00:09:46.498 00:09:46.498 job0: (groupid=0, jobs=1): err= 0: pid=3259153: Wed Nov 20 07:24:04 2024 00:09:46.498 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec) 00:09:46.498 slat (nsec): min=958, max=6637.7k, avg=61269.26, stdev=423201.44 00:09:46.498 clat (usec): min=3143, max=17699, avg=8006.81, stdev=1911.76 00:09:46.498 lat (usec): min=3151, max=17738, avg=8068.08, stdev=1941.09 00:09:46.498 clat percentiles (usec): 00:09:46.498 | 1.00th=[ 3425], 5.00th=[ 5473], 10.00th=[ 6194], 20.00th=[ 6587], 00:09:46.498 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7635], 60.00th=[ 8094], 00:09:46.498 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[10683], 95.00th=[11863], 00:09:46.498 | 99.00th=[13829], 99.50th=[15533], 99.90th=[15795], 99.95th=[16319], 00:09:46.498 | 99.99th=[17695] 00:09:46.498 write: IOPS=8429, BW=32.9MiB/s (34.5MB/s)(33.0MiB/1003msec); 0 zone resets 00:09:46.498 slat (nsec): min=1638, max=7173.6k, avg=52144.69, stdev=315044.90 00:09:46.498 clat (usec): min=1311, max=18790, avg=7193.89, stdev=2270.44 00:09:46.498 lat (usec): min=1321, max=18794, avg=7246.04, stdev=2292.07 00:09:46.498 clat percentiles (usec): 00:09:46.498 | 1.00th=[ 2802], 5.00th=[ 4359], 10.00th=[ 4621], 20.00th=[ 5735], 00:09:46.498 | 30.00th=[ 6456], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7177], 00:09:46.498 | 70.00th=[ 7373], 80.00th=[ 7701], 90.00th=[ 9896], 95.00th=[11469], 00:09:46.498 | 99.00th=[16581], 99.50th=[17957], 99.90th=[18744], 99.95th=[18744], 00:09:46.498 | 99.99th=[18744] 00:09:46.498 bw ( KiB/s): min=32816, max=33808, per=33.01%, avg=33312.00, stdev=701.45, samples=2 00:09:46.498 iops : min= 8204, max= 8452, avg=8328.00, stdev=175.36, samples=2 00:09:46.498 lat (msec) : 2=0.06%, 4=2.47%, 10=86.09%, 20=11.38% 00:09:46.498 cpu : usr=5.19%, sys=8.98%, ctx=699, majf=0, minf=2 00:09:46.498 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:46.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.498 issued rwts: total=8192,8455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.498 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.498 job1: (groupid=0, jobs=1): err= 0: pid=3259166: Wed Nov 20 07:24:04 2024 00:09:46.498 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:09:46.498 slat (nsec): min=950, max=18425k, avg=91292.20, stdev=684691.45 00:09:46.498 clat (usec): min=3949, max=72327, avg=10732.68, stdev=6835.74 00:09:46.498 lat (usec): min=4180, max=72335, avg=10823.97, stdev=6926.34 00:09:46.498 clat percentiles (usec): 00:09:46.498 | 1.00th=[ 5342], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 7767], 00:09:46.498 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:09:46.498 | 70.00th=[ 9372], 80.00th=[12387], 90.00th=[16581], 95.00th=[22938], 00:09:46.498 | 99.00th=[40633], 99.50th=[60031], 99.90th=[71828], 99.95th=[71828], 00:09:46.498 | 99.99th=[71828] 00:09:46.498 write: IOPS=4512, BW=17.6MiB/s (18.5MB/s)(17.8MiB/1010msec); 0 zone resets 
00:09:46.498 slat (nsec): min=1645, max=8457.2k, avg=131891.80, stdev=719576.49 00:09:46.498 clat (usec): min=1216, max=84238, avg=18479.41, stdev=21177.69 00:09:46.498 lat (usec): min=1226, max=84246, avg=18611.30, stdev=21318.18 00:09:46.498 clat percentiles (usec): 00:09:46.498 | 1.00th=[ 2606], 5.00th=[ 4293], 10.00th=[ 5014], 20.00th=[ 6390], 00:09:46.498 | 30.00th=[ 7046], 40.00th=[ 7439], 50.00th=[ 8356], 60.00th=[11338], 00:09:46.498 | 70.00th=[14615], 80.00th=[23462], 90.00th=[56886], 95.00th=[76022], 00:09:46.498 | 99.00th=[82314], 99.50th=[82314], 99.90th=[84411], 99.95th=[84411], 00:09:46.498 | 99.99th=[84411] 00:09:46.498 bw ( KiB/s): min=16384, max=19056, per=17.56%, avg=17720.00, stdev=1889.39, samples=2 00:09:46.498 iops : min= 4096, max= 4764, avg=4430.00, stdev=472.35, samples=2 00:09:46.498 lat (msec) : 2=0.35%, 4=0.79%, 10=64.73%, 20=19.52%, 50=8.12% 00:09:46.498 lat (msec) : 100=6.49% 00:09:46.498 cpu : usr=3.27%, sys=5.35%, ctx=353, majf=0, minf=1 00:09:46.498 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:46.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.498 issued rwts: total=4096,4558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.498 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.498 job2: (groupid=0, jobs=1): err= 0: pid=3259183: Wed Nov 20 07:24:04 2024 00:09:46.498 read: IOPS=5204, BW=20.3MiB/s (21.3MB/s)(20.5MiB/1008msec) 00:09:46.498 slat (nsec): min=965, max=7815.6k, avg=78578.73, stdev=546438.72 00:09:46.498 clat (usec): min=2138, max=22595, avg=10022.94, stdev=3077.08 00:09:46.498 lat (usec): min=4291, max=22597, avg=10101.52, stdev=3110.91 00:09:46.498 clat percentiles (usec): 00:09:46.498 | 1.00th=[ 6194], 5.00th=[ 6718], 10.00th=[ 7111], 20.00th=[ 8225], 00:09:46.498 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241], 00:09:46.498 | 70.00th=[10028], 80.00th=[12125], 90.00th=[14746], 95.00th=[15795], 00:09:46.498 | 99.00th=[22152], 99.50th=[22152], 99.90th=[22414], 99.95th=[22676], 00:09:46.498 | 99.99th=[22676] 00:09:46.498 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:09:46.498 slat (nsec): min=1684, max=7668.3k, avg=99702.75, stdev=613044.63 00:09:46.498 clat (usec): min=1131, max=75658, avg=13365.35, stdev=13571.99 00:09:46.498 lat (usec): min=1140, max=75666, avg=13465.06, stdev=13658.38 00:09:46.498 clat percentiles (usec): 00:09:46.498 | 1.00th=[ 2999], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 6063], 00:09:46.498 | 30.00th=[ 6980], 40.00th=[ 7898], 50.00th=[ 8356], 60.00th=[ 9896], 00:09:46.498 | 70.00th=[11863], 80.00th=[14615], 90.00th=[27919], 95.00th=[46924], 00:09:46.498 | 99.00th=[69731], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:09:46.498 | 99.99th=[76022] 00:09:46.498 bw ( KiB/s): min=18688, max=26352, per=22.31%, avg=22520.00, stdev=5419.27, samples=2 00:09:46.498 iops : min= 4672, max= 6588, avg=5630.00, stdev=1354.82, samples=2 00:09:46.498 lat (msec) : 2=0.14%, 4=0.91%, 10=64.02%, 20=27.21%, 50=5.39% 00:09:46.498 lat (msec) : 100=2.33% 00:09:46.498 cpu : usr=4.87%, sys=5.46%, ctx=403, majf=0, minf=2 00:09:46.498 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:46.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.498 issued rwts: total=5246,5632,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:09:46.498 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.498 job3: (groupid=0, jobs=1): err= 0: pid=3259189: Wed Nov 20 07:24:04 2024 00:09:46.498 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:09:46.498 slat (nsec): min=946, max=6527.3k, avg=73165.80, stdev=457838.61 00:09:46.498 clat (usec): min=5082, max=28906, avg=9168.63, stdev=2190.18 00:09:46.498 lat (usec): min=5580, max=32881, avg=9241.79, stdev=2233.15 00:09:46.498 clat percentiles (usec): 00:09:46.498 | 1.00th=[ 6063], 5.00th=[ 6849], 10.00th=[ 7832], 20.00th=[ 8225], 00:09:46.498 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:09:46.499 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[11076], 95.00th=[12649], 00:09:46.499 | 99.00th=[20317], 99.50th=[21103], 99.90th=[27657], 99.95th=[27657], 00:09:46.499 | 99.99th=[28967] 00:09:46.499 write: IOPS=6823, BW=26.7MiB/s (27.9MB/s)(26.7MiB/1002msec); 0 zone resets 00:09:46.499 slat (nsec): min=1581, max=4669.6k, avg=70420.58, stdev=306958.13 00:09:46.499 clat (usec): min=674, max=30407, avg=9666.53, stdev=4111.94 00:09:46.499 lat (usec): min=1183, max=30410, avg=9736.95, stdev=4141.17 00:09:46.499 clat percentiles (usec): 00:09:46.499 | 1.00th=[ 4555], 5.00th=[ 6587], 10.00th=[ 7439], 20.00th=[ 7898], 00:09:46.499 | 30.00th=[ 8029], 40.00th=[ 8094], 50.00th=[ 8225], 60.00th=[ 8291], 00:09:46.499 | 70.00th=[ 8586], 80.00th=[10683], 90.00th=[15139], 95.00th=[19268], 00:09:46.499 | 99.00th=[26870], 99.50th=[27657], 99.90th=[30278], 99.95th=[30278], 00:09:46.499 | 99.99th=[30278] 00:09:46.499 bw ( KiB/s): min=26616, max=27064, per=26.60%, avg=26840.00, stdev=316.78, samples=2 00:09:46.499 iops : min= 6654, max= 6766, avg=6710.00, stdev=79.20, samples=2 00:09:46.499 lat (usec) : 750=0.01% 00:09:46.499 lat (msec) : 2=0.14%, 4=0.25%, 10=80.19%, 20=16.50%, 50=2.91% 00:09:46.499 cpu : usr=3.70%, sys=6.29%, ctx=905, majf=0, minf=1 00:09:46.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:46.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.499 issued rwts: total=6656,6837,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.499 00:09:46.499 Run status group 0 (all jobs): 00:09:46.499 READ: bw=93.6MiB/s (98.1MB/s), 15.8MiB/s-31.9MiB/s (16.6MB/s-33.5MB/s), io=94.5MiB (99.1MB), run=1002-1010msec 00:09:46.499 WRITE: bw=98.6MiB/s (103MB/s), 17.6MiB/s-32.9MiB/s (18.5MB/s-34.5MB/s), io=99.5MiB (104MB), run=1002-1010msec 00:09:46.499 00:09:46.499 Disk stats (read/write): 00:09:46.499 nvme0n1: ios=6760/7168, merge=0/0, ticks=44288/38472, in_queue=82760, util=99.70% 00:09:46.499 nvme0n2: ios=3589/3584, merge=0/0, ticks=37185/64603, in_queue=101788, util=88.39% 00:09:46.499 nvme0n3: ios=4608/4807, merge=0/0, ticks=43155/55383, in_queue=98538, util=88.56% 00:09:46.499 nvme0n4: ios=5434/5632, merge=0/0, ticks=24712/27591, in_queue=52303, util=89.60% 00:09:46.499 07:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:46.499 [global] 00:09:46.499 thread=1 00:09:46.499 invalidate=1 00:09:46.499 rw=randwrite 00:09:46.499 time_based=1 00:09:46.499 runtime=1 00:09:46.499 ioengine=libaio 00:09:46.499 direct=1 00:09:46.499 bs=4096 00:09:46.499 iodepth=128 00:09:46.499 
norandommap=0 00:09:46.499 numjobs=1 00:09:46.499 00:09:46.499 verify_dump=1 00:09:46.499 verify_backlog=512 00:09:46.499 verify_state_save=0 00:09:46.499 do_verify=1 00:09:46.499 verify=crc32c-intel 00:09:46.499 [job0] 00:09:46.499 filename=/dev/nvme0n1 00:09:46.499 [job1] 00:09:46.499 filename=/dev/nvme0n2 00:09:46.499 [job2] 00:09:46.499 filename=/dev/nvme0n3 00:09:46.499 [job3] 00:09:46.499 filename=/dev/nvme0n4 00:09:46.499 Could not set queue depth (nvme0n1) 00:09:46.499 Could not set queue depth (nvme0n2) 00:09:46.499 Could not set queue depth (nvme0n3) 00:09:46.499 Could not set queue depth (nvme0n4) 00:09:46.759 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.759 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.759 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.759 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.759 fio-3.35 00:09:46.759 Starting 4 threads 00:09:48.141 00:09:48.141 job0: (groupid=0, jobs=1): err= 0: pid=3259661: Wed Nov 20 07:24:06 2024 00:09:48.141 read: IOPS=7656, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec) 00:09:48.141 slat (nsec): min=894, max=5600.2k, avg=61400.33, stdev=394936.30 00:09:48.141 clat (usec): min=1016, max=15691, avg=7714.80, stdev=1173.10 00:09:48.141 lat (usec): min=3862, max=15705, avg=7776.20, stdev=1220.05 00:09:48.141 clat percentiles (usec): 00:09:48.141 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 7111], 00:09:48.141 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 00:09:48.141 | 70.00th=[ 7898], 80.00th=[ 8225], 90.00th=[ 9241], 95.00th=[10028], 00:09:48.141 | 99.00th=[11469], 99.50th=[11863], 99.90th=[13173], 99.95th=[13829], 00:09:48.141 | 99.99th=[15664] 00:09:48.141 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec); 0 zone resets 00:09:48.141 slat (nsec): min=1482, max=25563k, avg=65177.62, stdev=550720.10 00:09:48.141 clat (usec): min=1252, max=62486, avg=8738.36, stdev=6246.86 00:09:48.141 lat (usec): min=2487, max=62517, avg=8803.54, stdev=6301.22 00:09:48.141 clat percentiles (usec): 00:09:48.141 | 1.00th=[ 4490], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 6915], 00:09:48.141 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7177], 60.00th=[ 7242], 00:09:48.141 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[10028], 95.00th=[19006], 00:09:48.141 | 99.00th=[40109], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:09:48.141 | 99.99th=[62653] 00:09:48.141 bw ( KiB/s): min=28584, max=32856, per=31.23%, avg=30720.00, stdev=3020.76, samples=2 00:09:48.141 iops : min= 7146, max= 8214, avg=7680.00, stdev=755.19, samples=2 00:09:48.141 lat (msec) : 2=0.01%, 4=0.09%, 10=92.02%, 20=5.40%, 50=2.46% 00:09:48.141 lat (msec) : 100=0.01% 00:09:48.141 cpu : usr=3.90%, sys=5.00%, ctx=923, majf=0, minf=1 00:09:48.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:48.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.141 issued rwts: total=7672,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.141 job1: (groupid=0, jobs=1): err= 0: pid=3259670: Wed Nov 20 07:24:06 2024 00:09:48.141 read: IOPS=4067, 
BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:09:48.141 slat (nsec): min=988, max=14367k, avg=118853.37, stdev=822674.08 00:09:48.141 clat (usec): min=3948, max=76766, avg=14019.64, stdev=9091.47 00:09:48.141 lat (usec): min=3954, max=76773, avg=14138.50, stdev=9174.53 00:09:48.141 clat percentiles (usec): 00:09:48.141 | 1.00th=[ 6521], 5.00th=[ 7111], 10.00th=[ 7570], 20.00th=[ 8291], 00:09:48.141 | 30.00th=[ 8848], 40.00th=[ 9896], 50.00th=[11994], 60.00th=[14222], 00:09:48.141 | 70.00th=[14877], 80.00th=[16581], 90.00th=[20317], 95.00th=[28705], 00:09:48.141 | 99.00th=[63701], 99.50th=[69731], 99.90th=[77071], 99.95th=[77071], 00:09:48.141 | 99.99th=[77071] 00:09:48.141 write: IOPS=4199, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1007msec); 0 zone resets 00:09:48.141 slat (nsec): min=1596, max=11465k, avg=115929.23, stdev=654930.29 00:09:48.141 clat (usec): min=2412, max=76737, avg=16618.02, stdev=11185.92 00:09:48.141 lat (usec): min=2417, max=76746, avg=16733.95, stdev=11240.83 00:09:48.141 clat percentiles (usec): 00:09:48.141 | 1.00th=[ 3884], 5.00th=[ 5342], 10.00th=[ 7373], 20.00th=[ 9634], 00:09:48.141 | 30.00th=[11600], 40.00th=[13435], 50.00th=[14615], 60.00th=[14746], 00:09:48.141 | 70.00th=[15139], 80.00th=[19792], 90.00th=[28181], 95.00th=[41157], 00:09:48.141 | 99.00th=[64750], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:09:48.141 | 99.99th=[77071] 00:09:48.141 bw ( KiB/s): min=16384, max=16496, per=16.71%, avg=16440.00, stdev=79.20, samples=2 00:09:48.141 iops : min= 4096, max= 4124, avg=4110.00, stdev=19.80, samples=2 00:09:48.141 lat (msec) : 4=0.60%, 10=30.07%, 20=54.19%, 50=12.77%, 100=2.38% 00:09:48.141 cpu : usr=3.48%, sys=4.37%, ctx=412, majf=0, minf=1 00:09:48.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:48.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.141 issued rwts: total=4096,4229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.141 job2: (groupid=0, jobs=1): err= 0: pid=3259688: Wed Nov 20 07:24:06 2024 00:09:48.141 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:09:48.141 slat (nsec): min=997, max=16693k, avg=123291.52, stdev=891179.00 00:09:48.141 clat (usec): min=4627, max=54305, avg=14801.91, stdev=7041.87 00:09:48.141 lat (usec): min=4632, max=54315, avg=14925.20, stdev=7120.59 00:09:48.141 clat percentiles (usec): 00:09:48.141 | 1.00th=[ 6456], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[ 8979], 00:09:48.141 | 30.00th=[11076], 40.00th=[12780], 50.00th=[14484], 60.00th=[14877], 00:09:48.141 | 70.00th=[16319], 80.00th=[18482], 90.00th=[20055], 95.00th=[26346], 00:09:48.141 | 99.00th=[50594], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:09:48.141 | 99.99th=[54264] 00:09:48.141 write: IOPS=4423, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1003msec); 0 zone resets 00:09:48.141 slat (nsec): min=1619, max=13065k, avg=106372.05, stdev=599208.50 00:09:48.141 clat (usec): min=1950, max=64330, avg=14999.86, stdev=9196.22 00:09:48.141 lat (usec): min=3036, max=64334, avg=15106.23, stdev=9248.65 00:09:48.141 clat percentiles (usec): 00:09:48.141 | 1.00th=[ 4228], 5.00th=[ 5407], 10.00th=[ 7177], 20.00th=[ 8225], 00:09:48.141 | 30.00th=[10290], 40.00th=[11731], 50.00th=[14091], 60.00th=[14746], 00:09:48.141 | 70.00th=[14877], 80.00th=[17957], 90.00th=[25297], 95.00th=[31851], 00:09:48.141 | 99.00th=[57934], 99.50th=[62653], 
99.90th=[64226], 99.95th=[64226], 00:09:48.141 | 99.99th=[64226] 00:09:48.141 bw ( KiB/s): min=16384, max=18096, per=17.53%, avg=17240.00, stdev=1210.57, samples=2 00:09:48.141 iops : min= 4096, max= 4524, avg=4310.00, stdev=302.64, samples=2 00:09:48.141 lat (msec) : 2=0.01%, 4=0.30%, 10=28.24%, 20=57.34%, 50=12.72% 00:09:48.141 lat (msec) : 100=1.38% 00:09:48.141 cpu : usr=2.59%, sys=4.99%, ctx=412, majf=0, minf=1 00:09:48.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:48.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.141 issued rwts: total=4096,4437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.141 job3: (groupid=0, jobs=1): err= 0: pid=3259694: Wed Nov 20 07:24:06 2024 00:09:48.141 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec) 00:09:48.141 slat (nsec): min=999, max=7524.0k, avg=64895.93, stdev=461097.43 00:09:48.141 clat (usec): min=2534, max=15623, avg=8403.15, stdev=2030.47 00:09:48.141 lat (usec): min=2537, max=15633, avg=8468.04, stdev=2058.90 00:09:48.141 clat percentiles (usec): 00:09:48.141 | 1.00th=[ 4359], 5.00th=[ 5604], 10.00th=[ 6390], 20.00th=[ 6915], 00:09:48.141 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8225], 00:09:48.141 | 70.00th=[ 8979], 80.00th=[10028], 90.00th=[11076], 95.00th=[12780], 00:09:48.141 | 99.00th=[14484], 99.50th=[14877], 99.90th=[15139], 99.95th=[15401], 00:09:48.141 | 99.99th=[15664] 00:09:48.141 write: IOPS=8389, BW=32.8MiB/s (34.4MB/s)(32.9MiB/1003msec); 0 zone resets 00:09:48.141 slat (nsec): min=1599, max=6581.7k, avg=50384.59, stdev=334981.04 00:09:48.141 clat (usec): min=1172, max=15063, avg=6943.69, stdev=1810.14 00:09:48.141 lat (usec): min=1181, max=15065, avg=6994.08, stdev=1824.21 00:09:48.141 clat percentiles (usec): 00:09:48.141 | 1.00th=[ 2704], 5.00th=[ 4015], 10.00th=[ 4228], 20.00th=[ 5014], 00:09:48.141 | 30.00th=[ 6063], 40.00th=[ 6718], 50.00th=[ 7439], 60.00th=[ 7767], 00:09:48.141 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8586], 95.00th=[10028], 00:09:48.141 | 99.00th=[10814], 99.50th=[10814], 99.90th=[14746], 99.95th=[14877], 00:09:48.141 | 99.99th=[15008] 00:09:48.141 bw ( KiB/s): min=30352, max=35952, per=33.71%, avg=33152.00, stdev=3959.80, samples=2 00:09:48.141 iops : min= 7588, max= 8988, avg=8288.00, stdev=989.95, samples=2 00:09:48.141 lat (msec) : 2=0.09%, 4=2.61%, 10=85.21%, 20=12.09% 00:09:48.142 cpu : usr=6.09%, sys=8.28%, ctx=655, majf=0, minf=1 00:09:48.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:48.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.142 issued rwts: total=8192,8415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.142 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.142 00:09:48.142 Run status group 0 (all jobs): 00:09:48.142 READ: bw=93.3MiB/s (97.8MB/s), 15.9MiB/s-31.9MiB/s (16.7MB/s-33.5MB/s), io=94.0MiB (98.5MB), run=1002-1007msec 00:09:48.142 WRITE: bw=96.0MiB/s (101MB/s), 16.4MiB/s-32.8MiB/s (17.2MB/s-34.4MB/s), io=96.7MiB (101MB), run=1002-1007msec 00:09:48.142 00:09:48.142 Disk stats (read/write): 00:09:48.142 nvme0n1: ios=6194/6443, merge=0/0, ticks=23718/28109, in_queue=51827, util=92.48% 00:09:48.142 nvme0n2: ios=3239/3584, merge=0/0, ticks=43808/58957, 
in_queue=102765, util=87.77% 00:09:48.142 nvme0n3: ios=3480/3584, merge=0/0, ticks=48688/53560, in_queue=102248, util=97.05% 00:09:48.142 nvme0n4: ios=6807/7168, merge=0/0, ticks=54285/46767, in_queue=101052, util=89.56% 00:09:48.142 07:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:48.142 07:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3259908 00:09:48.142 07:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:48.142 07:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:48.142 [global] 00:09:48.142 thread=1 00:09:48.142 invalidate=1 00:09:48.142 rw=read 00:09:48.142 time_based=1 00:09:48.142 runtime=10 00:09:48.142 ioengine=libaio 00:09:48.142 direct=1 00:09:48.142 bs=4096 00:09:48.142 iodepth=1 00:09:48.142 norandommap=1 00:09:48.142 numjobs=1 00:09:48.142 00:09:48.142 [job0] 00:09:48.142 filename=/dev/nvme0n1 00:09:48.142 [job1] 00:09:48.142 filename=/dev/nvme0n2 00:09:48.142 [job2] 00:09:48.142 filename=/dev/nvme0n3 00:09:48.142 [job3] 00:09:48.142 filename=/dev/nvme0n4 00:09:48.142 Could not set queue depth (nvme0n1) 00:09:48.142 Could not set queue depth (nvme0n2) 00:09:48.142 Could not set queue depth (nvme0n3) 00:09:48.142 Could not set queue depth (nvme0n4) 00:09:48.402 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.402 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.402 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.402 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.402 fio-3.35 00:09:48.402 Starting 4 threads 00:09:51.021 07:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:51.281 07:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:51.281 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=614400, buflen=4096 00:09:51.281 fio: pid=3260174, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:51.542 07:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.542 07:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:51.542 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1753088, buflen=4096 00:09:51.542 fio: pid=3260168, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:51.542 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11317248, buflen=4096 00:09:51.542 fio: pid=3260142, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:51.542 07:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.542 07:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:51.804 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12124160, buflen=4096 00:09:51.804 fio: pid=3260151, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:51.804 07:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.804 07:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:51.804 00:09:51.804 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3260142: Wed Nov 20 07:24:09 2024 00:09:51.804 read: IOPS=931, BW=3724KiB/s (3813kB/s)(10.8MiB/2968msec) 00:09:51.804 slat (usec): min=6, max=17144, avg=47.64, stdev=548.67 00:09:51.804 clat (usec): min=439, max=4480, avg=1012.05, stdev=176.03 00:09:51.804 lat (usec): min=470, max=18210, avg=1059.69, stdev=580.64 00:09:51.804 clat percentiles (usec): 00:09:51.804 | 1.00th=[ 627], 5.00th=[ 717], 10.00th=[ 775], 20.00th=[ 889], 00:09:51.804 | 30.00th=[ 988], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:09:51.804 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:09:51.804 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 3490], 99.95th=[ 4424], 00:09:51.804 | 99.99th=[ 4490] 00:09:51.804 bw ( KiB/s): min= 3576, max= 4696, per=48.25%, avg=3856.00, stdev=473.76, samples=5 00:09:51.804 iops : min= 894, max= 1174, avg=964.00, stdev=118.44, samples=5 00:09:51.804 lat (usec) : 500=0.07%, 750=7.31%, 1000=24.60% 00:09:51.804 lat (msec) : 2=67.87%, 4=0.04%, 10=0.07% 00:09:51.804 cpu : usr=1.92%, sys=3.44%, ctx=2768, majf=0, minf=2 00:09:51.804 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.804 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.804 issued rwts: total=2764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.804 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.804 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3260151: Wed Nov 20 07:24:09 2024 00:09:51.804 read: IOPS=938, BW=3754KiB/s (3844kB/s)(11.6MiB/3154msec) 00:09:51.804 slat (usec): min=6, max=33825, avg=63.57, stdev=954.13 00:09:51.804 clat (usec): min=178, max=1228, avg=988.10, stdev=74.92 00:09:51.804 lat (usec): min=204, max=34842, avg=1051.67, stdev=958.37 00:09:51.804 clat percentiles (usec): 00:09:51.804 | 1.00th=[ 775], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 938], 00:09:51.804 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 996], 60.00th=[ 1012], 00:09:51.804 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1090], 00:09:51.804 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1221], 99.95th=[ 1221], 00:09:51.804 | 99.99th=[ 1221] 00:09:51.804 bw ( KiB/s): min= 3219, max= 3968, per=47.54%, avg=3799.17, stdev=287.47, samples=6 00:09:51.804 iops : min= 804, max= 992, avg=949.67, stdev=72.17, samples=6 00:09:51.804 lat (usec) : 250=0.03%, 750=0.51%, 1000=50.56% 00:09:51.804 lat (msec) : 2=48.87% 00:09:51.804 cpu : usr=0.98%, sys=2.85%, ctx=2967, majf=0, minf=2 00:09:51.804 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.804 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.804 issued rwts: total=2961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.804 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.804 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3260168: Wed Nov 20 07:24:09 2024 00:09:51.804 read: IOPS=151, BW=606KiB/s (621kB/s)(1712KiB/2823msec) 00:09:51.804 slat (nsec): min=7282, max=46860, avg=26569.12, stdev=3029.80 00:09:51.804 clat (usec): min=752, max=42129, avg=6510.45, stdev=13899.41 00:09:51.804 lat (usec): min=772, max=42157, avg=6537.02, stdev=13899.91 00:09:51.804 clat percentiles (usec): 00:09:51.804 | 1.00th=[ 783], 5.00th=[ 873], 10.00th=[ 922], 20.00th=[ 979], 00:09:51.804 | 30.00th=[ 1004], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:09:51.804 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[41157], 95.00th=[41681], 00:09:51.804 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:51.804 | 99.99th=[42206] 00:09:51.805 bw ( KiB/s): min= 96, max= 2952, per=8.41%, avg=672.00, stdev=1274.58, samples=5 00:09:51.805 iops : min= 24, max= 738, avg=168.00, stdev=318.64, samples=5 00:09:51.805 lat (usec) : 1000=27.27% 00:09:51.805 lat (msec) : 2=58.97%, 50=13.52% 00:09:51.805 cpu : usr=0.11%, sys=0.57%, ctx=429, majf=0, minf=2 00:09:51.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.805 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.805 issued rwts: total=429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.805 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3260174: Wed Nov 20 07:24:09 2024 00:09:51.805 read: IOPS=57, BW=230KiB/s (235kB/s)(600KiB/2613msec) 00:09:51.805 slat (nsec): min=25149, max=64207, avg=27675.80, stdev=4782.74 00:09:51.805 clat (usec): min=771, max=42178, avg=17236.79, stdev=19914.49 00:09:51.805 lat (usec): min=798, max=42206, avg=17264.43, stdev=19915.00 00:09:51.805 clat percentiles (usec): 00:09:51.805 | 1.00th=[ 799], 5.00th=[ 906], 10.00th=[ 971], 20.00th=[ 1020], 00:09:51.805 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1205], 00:09:51.805 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:51.805 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:51.805 | 99.99th=[42206] 00:09:51.805 bw ( KiB/s): min= 96, max= 768, per=2.94%, avg=235.20, stdev=298.03, samples=5 00:09:51.805 iops : min= 24, max= 192, avg=58.80, stdev=74.51, samples=5 00:09:51.805 lat (usec) : 1000=13.91% 00:09:51.805 lat (msec) : 2=45.70%, 50=39.74% 00:09:51.805 cpu : usr=0.00%, sys=0.31%, ctx=152, majf=0, minf=1 00:09:51.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.805 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.805 issued rwts: total=151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.805 00:09:51.805 Run status group 0 (all jobs): 00:09:51.805 READ: bw=7991KiB/s (8183kB/s), 230KiB/s-3754KiB/s (235kB/s-3844kB/s), io=24.6MiB (25.8MB), run=2613-3154msec 
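The err=95 results above are the point of this stage: the backing bdevs are deleted while fio still holds the namespaces open, so every in-flight read completes with errno 95 (Operation not supported) and the harness later treats the failed run as expected. A minimal sketch of that hotplug sequence, built only from commands visible in the trace (assuming a target is already serving the four namespaces as /dev/nvme0n1..n4; this is not the wrapper's exact invocation):

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --bs=4096 --iodepth=1 --rw=read --norandommap=1 \
      --time_based=1 --runtime=10 &
  fio_pid=$!
  sleep 3
  # Pull the block devices out from under the running job:
  scripts/rpc.py bdev_raid_delete concat0
  scripts/rpc.py bdev_raid_delete raid0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_malloc_delete Malloc1
  scripts/rpc.py bdev_malloc_delete Malloc2
  wait $fio_pid   # non-zero exit; corresponds to fio_status=4 in the trace below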
00:09:51.805 00:09:51.805 Disk stats (read/write): 00:09:51.805 nvme0n1: ios=2690/0, merge=0/0, ticks=2480/0, in_queue=2480, util=94.02% 00:09:51.805 nvme0n2: ios=2921/0, merge=0/0, ticks=2835/0, in_queue=2835, util=92.51% 00:09:51.805 nvme0n3: ios=423/0, merge=0/0, ticks=2572/0, in_queue=2572, util=96.07% 00:09:51.805 nvme0n4: ios=150/0, merge=0/0, ticks=2584/0, in_queue=2584, util=96.43% 00:09:52.065 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.065 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:52.065 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.065 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:52.325 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.325 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:52.586 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.586 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3259908 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:52.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:52.846 nvmf hotplug test: fio failed as expected 00:09:52.846 07:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:53.107 rmmod nvme_tcp 00:09:53.107 rmmod nvme_fabrics 00:09:53.107 rmmod nvme_keyring 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3256278 ']' 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3256278 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3256278 ']' 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3256278 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3256278 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3256278' 00:09:53.107 killing process with pid 3256278 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3256278 00:09:53.107 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3256278 00:09:53.366 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.366 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.366 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:09:53.366 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:53.366 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:53.366 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.366 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.366 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.366 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:53.366 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.366 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.366 07:24:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.277 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.277 00:09:55.277 real 0m29.613s 00:09:55.277 user 2m29.376s 00:09:55.277 sys 0m9.704s 00:09:55.277 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:55.277 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.277 ************************************ 00:09:55.277 END TEST nvmf_fio_target 00:09:55.277 ************************************ 00:09:55.277 07:24:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:55.277 07:24:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:55.277 07:24:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.277 07:24:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.539 ************************************ 00:09:55.539 START TEST nvmf_bdevio 00:09:55.539 ************************************ 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:55.539 * Looking for test storage... 
00:09:55.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.539 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:55.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.540 --rc genhtml_branch_coverage=1 00:09:55.540 --rc genhtml_function_coverage=1 00:09:55.540 --rc genhtml_legend=1 00:09:55.540 --rc geninfo_all_blocks=1 00:09:55.540 --rc geninfo_unexecuted_blocks=1 00:09:55.540 00:09:55.540 ' 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:55.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.540 --rc genhtml_branch_coverage=1 00:09:55.540 --rc genhtml_function_coverage=1 00:09:55.540 --rc genhtml_legend=1 00:09:55.540 --rc geninfo_all_blocks=1 00:09:55.540 --rc geninfo_unexecuted_blocks=1 00:09:55.540 00:09:55.540 ' 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:55.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.540 --rc genhtml_branch_coverage=1 00:09:55.540 --rc genhtml_function_coverage=1 00:09:55.540 --rc genhtml_legend=1 00:09:55.540 --rc geninfo_all_blocks=1 00:09:55.540 --rc geninfo_unexecuted_blocks=1 00:09:55.540 00:09:55.540 ' 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:55.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.540 --rc genhtml_branch_coverage=1 00:09:55.540 --rc genhtml_function_coverage=1 00:09:55.540 --rc genhtml_legend=1 00:09:55.540 --rc geninfo_all_blocks=1 00:09:55.540 --rc geninfo_unexecuted_blocks=1 00:09:55.540 00:09:55.540 ' 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.540 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.802 07:24:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:03.948 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:03.949 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:03.949 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:03.949 07:24:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:03.949 Found net devices under 0000:31:00.0: cvl_0_0 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:03.949 Found net devices under 0000:31:00.1: cvl_0_1 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.949 
07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:03.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:10:03.949 00:10:03.949 --- 10.0.0.2 ping statistics --- 00:10:03.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.949 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:10:03.949 00:10:03.949 --- 10.0.0.1 ping statistics --- 00:10:03.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.949 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3265953 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3265953 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3265953 ']' 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:03.949 07:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.949 [2024-11-20 07:24:21.460618] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
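Two recurring warts in the trace are worth flagging before the target comes up. paths/export.sh prepends the same /opt/go, /opt/golangci and /opt/protoc directories every time it is sourced, which is why the PATH echoed earlier repeats each directory many times over, and nvmf/common.sh line 33 runs '[' '' -eq 1 ']' when its flag variable is empty, producing the "[: : integer expression expected" noise seen above. A minimal sketch of the usual guards, where FLAG and prepend_path are hypothetical stand-ins rather than names from the SPDK scripts:

  [ "${FLAG:-0}" -eq 1 ] && echo "flag set"    # default an empty flag to 0 before -eq
  prepend_path() {                             # prepend a directory to PATH only once
    case ":$PATH:" in
      *":$1:"*) ;;                             # already present, do nothing
      *) PATH="$1:$PATH" ;;
    esac
  }
  prepend_path /opt/go/1.21.1/bin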
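gather_supported_nvmf_pci_devs, traced earlier, buckets NICs by PCI vendor:device ID: pci_bus_cache maps each "vendor:device" pair to its bus addresses, and matches are appended to the e810, x722 and mlx arrays. This run matched two Intel 0x159b (E810) ports at 0000:31:00.0/0000:31:00.1. A minimal sketch of the same lookup, with the cache contents hard-coded from this run:

  intel=0x8086 mellanox=0x15b3
  declare -A pci_bus_cache=( ["$intel:0x159b"]="0000:31:00.0 0000:31:00.1" )
  e810=() x722=() mlx=()
  e810+=(${pci_bus_cache["$intel:0x159b"]})     # unquoted on purpose: splits into two bus addresses
  x722+=(${pci_bus_cache["$intel:0x37d2"]})     # unset key expands to nothing on this machine
  mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # likewise empty here
  echo "${e810[@]}"                             # -> 0000:31:00.0 0000:31:00.1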
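nvmf_tcp_init above carves the back-to-back E810 pair into a tiny two-host topology: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), with an iptables ACCEPT for the NVMe/TCP port and a ping each way as a sanity check. Condensed from the trace, after the stale addresses are flushed:

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # tagged with an SPDK_NVMF comment for later cleanup
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator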
00:10:03.949 [2024-11-20 07:24:21.460684] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.949 [2024-11-20 07:24:21.566673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.949 [2024-11-20 07:24:21.616700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.949 [2024-11-20 07:24:21.616763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.949 [2024-11-20 07:24:21.616773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.949 [2024-11-20 07:24:21.616781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.950 [2024-11-20 07:24:21.616787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.950 [2024-11-20 07:24:21.618821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:03.950 [2024-11-20 07:24:21.618986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:03.950 [2024-11-20 07:24:21.619154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:03.950 [2024-11-20 07:24:21.619158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.211 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:04.211 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:04.211 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.211 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.211 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.212 [2024-11-20 07:24:22.339667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.212 Malloc0 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.212 07:24:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.212 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.474 [2024-11-20 07:24:22.419121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.474 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.474 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:04.474 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:04.474 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:04.474 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:04.474 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:04.474 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:04.474 { 00:10:04.474 "params": { 00:10:04.474 "name": "Nvme$subsystem", 00:10:04.474 "trtype": "$TEST_TRANSPORT", 00:10:04.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.474 "adrfam": "ipv4", 00:10:04.474 "trsvcid": "$NVMF_PORT", 00:10:04.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.474 "hdgst": ${hdgst:-false}, 00:10:04.474 "ddgst": ${ddgst:-false} 00:10:04.474 }, 00:10:04.474 "method": "bdev_nvme_attach_controller" 00:10:04.474 } 00:10:04.474 EOF 00:10:04.474 )") 00:10:04.474 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:04.474 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:04.474 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:04.474 07:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:04.474 "params": { 00:10:04.474 "name": "Nvme1", 00:10:04.474 "trtype": "tcp", 00:10:04.474 "traddr": "10.0.0.2", 00:10:04.474 "adrfam": "ipv4", 00:10:04.474 "trsvcid": "4420", 00:10:04.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.474 "hdgst": false, 00:10:04.474 "ddgst": false 00:10:04.474 }, 00:10:04.474 "method": "bdev_nvme_attach_controller" 00:10:04.474 }' 00:10:04.474 [2024-11-20 07:24:22.485618] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
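Up to this point the target has been provisioned entirely over rpc_cmd: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace and a listener on 10.0.0.2:4420. Against a standalone target the same sequence would look roughly like this with scripts/rpc.py (arguments copied from the trace; rpc_cmd is a thin wrapper around it), after which bdevio attaches using the generated JSON shown above, fed through a process substitution that shows up as /dev/fd/62:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)              # the <() expands to --json /dev/fd/62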
00:10:04.474 [2024-11-20 07:24:22.485683] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266046 ] 00:10:04.474 [2024-11-20 07:24:22.580585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:04.474 [2024-11-20 07:24:22.637465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.474 [2024-11-20 07:24:22.637629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.474 [2024-11-20 07:24:22.637630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.047 I/O targets: 00:10:05.047 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:05.047 00:10:05.047 00:10:05.047 CUnit - A unit testing framework for C - Version 2.1-3 00:10:05.047 http://cunit.sourceforge.net/ 00:10:05.047 00:10:05.047 00:10:05.047 Suite: bdevio tests on: Nvme1n1 00:10:05.047 Test: blockdev write read block ...passed 00:10:05.047 Test: blockdev write zeroes read block ...passed 00:10:05.047 Test: blockdev write zeroes read no split ...passed 00:10:05.047 Test: blockdev write zeroes read split ...passed 00:10:05.047 Test: blockdev write zeroes read split partial ...passed 00:10:05.047 Test: blockdev reset ...[2024-11-20 07:24:23.190777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:05.047 [2024-11-20 07:24:23.190877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12881c0 (9): Bad file descriptor 00:10:05.307 [2024-11-20 07:24:23.340410] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:05.307 passed 00:10:05.307 Test: blockdev write read 8 blocks ...passed 00:10:05.307 Test: blockdev write read size > 128k ...passed 00:10:05.307 Test: blockdev write read invalid size ...passed 00:10:05.307 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:05.307 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:05.307 Test: blockdev write read max offset ...passed 00:10:05.567 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:05.567 Test: blockdev writev readv 8 blocks ...passed 00:10:05.567 Test: blockdev writev readv 30 x 1block ...passed 00:10:05.567 Test: blockdev writev readv block ...passed 00:10:05.567 Test: blockdev writev readv size > 128k ...passed 00:10:05.567 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:05.567 Test: blockdev comparev and writev ...[2024-11-20 07:24:23.606862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.567 [2024-11-20 07:24:23.606911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:05.567 [2024-11-20 07:24:23.606928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.567 [2024-11-20 07:24:23.606936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:05.567 [2024-11-20 07:24:23.607417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.567 [2024-11-20 07:24:23.607429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:05.567 [2024-11-20 07:24:23.607443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.567 [2024-11-20 07:24:23.607452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:05.567 [2024-11-20 07:24:23.607872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.567 [2024-11-20 07:24:23.607883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:05.567 [2024-11-20 07:24:23.607898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.567 [2024-11-20 07:24:23.607906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:05.567 [2024-11-20 07:24:23.608238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.567 [2024-11-20 07:24:23.608249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:05.567 [2024-11-20 07:24:23.608264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.567 [2024-11-20 07:24:23.608273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:05.567 passed 00:10:05.567 Test: blockdev nvme passthru rw ...passed 00:10:05.567 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:24:23.692687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:05.567 [2024-11-20 07:24:23.692729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:05.567 [2024-11-20 07:24:23.693141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:05.567 [2024-11-20 07:24:23.693153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:05.567 [2024-11-20 07:24:23.693600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:05.567 [2024-11-20 07:24:23.693611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:05.567 [2024-11-20 07:24:23.694023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:05.567 [2024-11-20 07:24:23.694034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:05.567 passed 00:10:05.567 Test: blockdev nvme admin passthru ...passed 00:10:05.567 Test: blockdev copy ...passed 00:10:05.568 00:10:05.568 Run Summary: Type Total Ran Passed Failed Inactive 00:10:05.568 suites 1 1 n/a 0 0 00:10:05.568 tests 23 23 23 0 0 00:10:05.568 asserts 152 152 152 0 n/a 00:10:05.568 00:10:05.568 Elapsed time = 1.571 seconds 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:05.828 rmmod nvme_tcp 00:10:05.828 rmmod nvme_fabrics 00:10:05.828 rmmod nvme_keyring 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
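The COMPARE/WRITE pairs in the comparev-and-writev output above are fused NVMe compare-and-write commands run against mismatching data on purpose: the compare half completes with Compare Failure (SCT 2h/SC 85h, printed as 02/85) and its fused write half is then finished as Aborted - Failed Fused (SCT 0h/SC 09h, printed as 00/09), which is the pass condition for that test. Counting the pairs in a saved copy of this output (bdevio.log is a hypothetical file name):

  grep -c 'COMPARE FAILURE (02/85)' bdevio.log
  grep -c 'ABORTED - FAILED FUSED (00/09)' bdevio.log    # both counts should match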
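On scheduling: nvmf_tgt was started with -m 0x78 (reactors on cores 3-6, matching the four reactor notices after its banner) while bdevio ran with -c 0x7 (reactors on cores 0-2), so target and initiator never contended for a core. A quick bash one-off to decode such a mask:

  mask=0x78
  for ((i = 0; i < 8; i++)); do
    (( (mask >> i) & 1 )) && echo "core $i"    # prints core 3 .. core 6 for 0x78
  done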
00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3265953 ']' 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3265953 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3265953 ']' 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3265953 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:05.828 07:24:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3265953 00:10:05.828 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:05.828 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:05.828 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3265953' 00:10:05.828 killing process with pid 3265953 00:10:05.828 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3265953 00:10:05.828 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3265953 00:10:06.088 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:06.088 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:06.088 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:06.088 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:06.088 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:06.088 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:06.088 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:06.088 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.088 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:06.088 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.088 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.088 07:24:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.001 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:08.261 00:10:08.261 real 0m12.687s 00:10:08.261 user 0m15.243s 00:10:08.261 sys 0m6.308s 00:10:08.261 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:08.261 07:24:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.261 ************************************ 00:10:08.261 END TEST nvmf_bdevio 00:10:08.261 ************************************ 00:10:08.261 07:24:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:08.261 00:10:08.261 real 5m6.475s 00:10:08.261 user 11m47.617s 00:10:08.261 sys 1m53.590s 
00:10:08.261 07:24:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:08.261 07:24:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.261 ************************************ 00:10:08.261 END TEST nvmf_target_core 00:10:08.261 ************************************ 00:10:08.261 07:24:26 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:08.261 07:24:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:08.261 07:24:26 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:08.261 07:24:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:08.261 ************************************ 00:10:08.261 START TEST nvmf_target_extra 00:10:08.261 ************************************ 00:10:08.261 07:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:08.261 * Looking for test storage... 00:10:08.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:08.261 07:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:08.261 07:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:08.261 07:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.522 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:08.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.523 --rc genhtml_branch_coverage=1 00:10:08.523 --rc genhtml_function_coverage=1 00:10:08.523 --rc genhtml_legend=1 00:10:08.523 --rc geninfo_all_blocks=1 00:10:08.523 --rc geninfo_unexecuted_blocks=1 00:10:08.523 00:10:08.523 ' 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:08.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.523 --rc genhtml_branch_coverage=1 00:10:08.523 --rc genhtml_function_coverage=1 00:10:08.523 --rc genhtml_legend=1 00:10:08.523 --rc geninfo_all_blocks=1 00:10:08.523 --rc geninfo_unexecuted_blocks=1 00:10:08.523 00:10:08.523 ' 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:08.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.523 --rc genhtml_branch_coverage=1 00:10:08.523 --rc genhtml_function_coverage=1 00:10:08.523 --rc genhtml_legend=1 00:10:08.523 --rc geninfo_all_blocks=1 00:10:08.523 --rc geninfo_unexecuted_blocks=1 00:10:08.523 00:10:08.523 ' 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:08.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.523 --rc genhtml_branch_coverage=1 00:10:08.523 --rc genhtml_function_coverage=1 00:10:08.523 --rc genhtml_legend=1 00:10:08.523 --rc geninfo_all_blocks=1 00:10:08.523 --rc geninfo_unexecuted_blocks=1 00:10:08.523 00:10:08.523 ' 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
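The teardown traced above mirrors the setup: killprocess looks up the process name of pid 3265953 (reactor_3 here) to decide whether it has to kill through sudo, kills and waits, and then iptr restores the firewall by dropping every rule carrying the SPDK_NVMF comment that was attached when the ACCEPT rule was inserted:

  # what nvmf/common.sh's iptr helper boils down to
  iptables-save | grep -v SPDK_NVMF | iptables-restore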
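The lt 1.15 2 trace above is scripts/common.sh's cmp_versions gate deciding which lcov coverage flags to export: it splits both version strings on ".-:" and compares them field by field. An equivalent check using GNU sort -V in place of that field-by-field loop (a swapped-in technique, not the SPDK implementation):

  lt() {  # true if $1 sorts strictly before $2 as a version string
    [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  lt 1.15 2 && echo "lcov 1.15 predates 2: enable branch/function coverage flags"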
00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:08.523 ************************************ 00:10:08.523 START TEST nvmf_example 00:10:08.523 ************************************ 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:08.523 * Looking for test storage... 
00:10:08.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:08.523 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:08.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.786 --rc genhtml_branch_coverage=1 00:10:08.786 --rc genhtml_function_coverage=1 00:10:08.786 --rc genhtml_legend=1 00:10:08.786 --rc geninfo_all_blocks=1 00:10:08.786 --rc geninfo_unexecuted_blocks=1 00:10:08.786 00:10:08.786 ' 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:08.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.786 --rc genhtml_branch_coverage=1 00:10:08.786 --rc genhtml_function_coverage=1 00:10:08.786 --rc genhtml_legend=1 00:10:08.786 --rc geninfo_all_blocks=1 00:10:08.786 --rc geninfo_unexecuted_blocks=1 00:10:08.786 00:10:08.786 ' 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:08.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.786 --rc genhtml_branch_coverage=1 00:10:08.786 --rc genhtml_function_coverage=1 00:10:08.786 --rc genhtml_legend=1 00:10:08.786 --rc geninfo_all_blocks=1 00:10:08.786 --rc geninfo_unexecuted_blocks=1 00:10:08.786 00:10:08.786 ' 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:08.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.786 --rc genhtml_branch_coverage=1 00:10:08.786 --rc genhtml_function_coverage=1 00:10:08.786 --rc genhtml_legend=1 00:10:08.786 --rc geninfo_all_blocks=1 00:10:08.786 --rc geninfo_unexecuted_blocks=1 00:10:08.786 00:10:08.786 ' 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:08.786 07:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.786 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:08.787 07:24:26 
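The "[: : integer expression expected" complaint captured above is bash reporting that an empty string reached a numeric test at nvmf/common.sh line 33; the flag it tests is simply unset in this job's environment. A defensive sketch of the pattern, with a deliberately hypothetical variable name since the real one is not visible in this trace:

    # SOME_TEST_FLAG stands in for the unset variable tested at line 33.
    # Defaulting it to 0 keeps '[' happy even when the caller never set it.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--hypothetical-extra-arg)   # the real branch body is not shown in the trace
    fi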
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:08.787 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:16.927 07:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:16.927 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
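The "Found 0000:31:00.0 (0x8086 - 0x159b)" hit above (and its twin for 0000:31:00.1 just below) is one of the Intel E810 functions that NET_TYPE=phy selects. The result can be cross-checked by hand; a sketch assuming lspci from pciutils is available on the node:

    lspci -d 8086:159b    # should list 31:00.0 and 31:00.1 (vendor 0x8086, device 0x159b)
    basename "$(readlink /sys/bus/pci/devices/0000:31:00.0/driver)"    # prints: ice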
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:16.927 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:16.927 Found net devices under 0000:31:00.0: cvl_0_0 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.927 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:16.928 Found net devices under 0000:31:00.1: cvl_0_1 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.928 07:24:34 
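A standalone sketch of the sysfs lookup the loop above performs: each network-capable PCI function publishes its interface name under /sys/bus/pci/devices/<bdf>/net/, which is how 0000:31:00.0 and 0000:31:00.1 resolve to cvl_0_0 and cvl_0_1:

    for pci in 0000:31:00.0 0000:31:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] || continue               # no netdev bound to this function
            name=${dev##*/}                         # strip the sysfs path, e.g. cvl_0_0
            echo "Found net devices under $pci: $name ($(cat "$dev/operstate"))"
        done
    done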
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:16.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
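Condensing the nvmf_tcp_init sequence above into the handful of commands that build the test topology (all lifted from the trace: the target port lives in a private network namespace, the initiator port stays in the root namespace; the ping replies resume just below):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                   # initiator -> target reachability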
00:10:16.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:10:16.928 00:10:16.928 --- 10.0.0.2 ping statistics --- 00:10:16.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.928 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:10:16.928 00:10:16.928 --- 10.0.0.1 ping statistics --- 00:10:16.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.928 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3270754 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3270754 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 3270754 ']' 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:16.928 07:24:34 
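waitforlisten, entered in the trace at this point, polls for the application's JSON-RPC socket instead of sleeping for a fixed interval. Roughly, as a simplified sketch (the real autotest_common.sh implementation carries more retries and diagnostics):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1        # the target process died
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                   # RPC server is up and answering
            fi
            sleep 0.1
        done
        return 1                                           # timed out waiting for the socket
    }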
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:16.928 07:24:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
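For reference, the rpc_cmd provisioning sequence traced above, written out as the equivalent direct rpc.py invocations (the arguments are verbatim from the trace; the comments are interpretive):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # -t/-o/-u come from NVMF_TRANSPORT_OPTS; -u 8192 is the in-capsule data size
    $rpc bdev_malloc_create 64 512                  # 64 MiB RAM-backed bdev with 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # -a allows any host NQN
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420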
xtrace_disable 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:17.500 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:29.737 Initializing NVMe Controllers
00:10:29.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:29.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:29.737 Initialization complete. Launching workers.
00:10:29.737 ========================================================
00:10:29.737                                                                                                Latency(us)
00:10:29.737 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:10:29.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   18718.21      73.12    3418.66     621.25   15653.66
00:10:29.737 ========================================================
00:10:29.737 Total                                                                    :   18718.21      73.12    3418.66     621.25   15653.66
00:10:29.737
00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:29.737 rmmod nvme_tcp
00:10:29.737 rmmod nvme_fabrics
00:10:29.737 rmmod nvme_keyring
00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3270754 ']' 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3270754 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 3270754 ']' 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 3270754 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3270754 00:10:29.737 07:24:45
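A quick self-consistency check on the perf summary above: at the 4 KiB I/O size used (-o 4096), throughput in MiB/s should equal IOPS x 4096 / 2^20, and it does:

    awk 'BEGIN { printf "%.2f MiB/s\n", 18718.21 * 4096 / 1048576 }'    # -> 73.12 MiB/s, matching the Total row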
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3270754' 00:10:29.737 killing process with pid 3270754 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 3270754 00:10:29.737 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 3270754 00:10:29.737 nvmf threads initialize successfully 00:10:29.737 bdev subsystem init successfully 00:10:29.737 created a nvmf target service 00:10:29.737 create targets's poll groups done 00:10:29.737 all subsystems of target started 00:10:29.737 nvmf target is running 00:10:29.737 all subsystems of target stopped 00:10:29.737 destroy targets's poll groups done 00:10:29.737 destroyed the nvmf target service 00:10:29.737 bdev subsystem finish successfully 00:10:29.737 nvmf threads destroy successfully 00:10:29.737 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.737 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:29.737 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:29.737 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:29.737 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:29.737 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:29.737 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:29.737 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:29.737 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:29.737 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.737 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.737 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.998 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:29.998 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:29.999 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:29.999 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:29.999 00:10:29.999 real 0m21.546s 00:10:29.999 user 0m46.699s 00:10:29.999 sys 0m7.074s 00:10:29.999 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:29.999 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:29.999 ************************************ 00:10:29.999 END TEST nvmf_example 00:10:29.999 ************************************ 00:10:29.999 07:24:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:29.999 07:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:29.999 07:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:29.999 07:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:30.260 ************************************ 00:10:30.260 START TEST nvmf_filesystem 00:10:30.260 ************************************ 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:30.260 * Looking for test storage... 00:10:30.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:30.260 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:30.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.261 --rc genhtml_branch_coverage=1 00:10:30.261 --rc genhtml_function_coverage=1 00:10:30.261 --rc genhtml_legend=1 00:10:30.261 --rc geninfo_all_blocks=1 00:10:30.261 --rc geninfo_unexecuted_blocks=1 00:10:30.261 00:10:30.261 ' 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:30.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.261 --rc genhtml_branch_coverage=1 00:10:30.261 --rc genhtml_function_coverage=1 00:10:30.261 --rc genhtml_legend=1 00:10:30.261 --rc geninfo_all_blocks=1 00:10:30.261 --rc geninfo_unexecuted_blocks=1 00:10:30.261 00:10:30.261 ' 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:30.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.261 --rc genhtml_branch_coverage=1 00:10:30.261 --rc genhtml_function_coverage=1 00:10:30.261 --rc genhtml_legend=1 00:10:30.261 --rc geninfo_all_blocks=1 00:10:30.261 --rc geninfo_unexecuted_blocks=1 00:10:30.261 00:10:30.261 ' 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:30.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.261 --rc genhtml_branch_coverage=1 00:10:30.261 --rc genhtml_function_coverage=1 00:10:30.261 --rc genhtml_legend=1 00:10:30.261 --rc geninfo_all_blocks=1 00:10:30.261 --rc geninfo_unexecuted_blocks=1 00:10:30.261 00:10:30.261 ' 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:30.261 07:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:30.261 
07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:30.261 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:30.262 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:30.527 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:30.527 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:30.527 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.527 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:30.527 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.527 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:30.527 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:30.527 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:30.527 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:30.527 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:30.527 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:30.527 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:30.527 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:30.527 #define SPDK_CONFIG_H 00:10:30.527 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:30.527 #define SPDK_CONFIG_APPS 1 00:10:30.527 #define SPDK_CONFIG_ARCH native 00:10:30.527 #undef SPDK_CONFIG_ASAN 00:10:30.527 #undef SPDK_CONFIG_AVAHI 00:10:30.527 #undef SPDK_CONFIG_CET 00:10:30.527 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:30.527 #define SPDK_CONFIG_COVERAGE 1 00:10:30.527 #define SPDK_CONFIG_CROSS_PREFIX 00:10:30.527 #undef SPDK_CONFIG_CRYPTO 00:10:30.527 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:30.527 #undef SPDK_CONFIG_CUSTOMOCF 00:10:30.527 #undef SPDK_CONFIG_DAOS 00:10:30.527 #define SPDK_CONFIG_DAOS_DIR 00:10:30.527 #define SPDK_CONFIG_DEBUG 1 00:10:30.527 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:30.527 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:30.527 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:30.527 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:30.527 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:30.527 #undef SPDK_CONFIG_DPDK_UADK 00:10:30.527 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:30.527 #define SPDK_CONFIG_EXAMPLES 1 00:10:30.527 #undef SPDK_CONFIG_FC 00:10:30.527 #define SPDK_CONFIG_FC_PATH 00:10:30.527 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:30.527 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:30.527 #define SPDK_CONFIG_FSDEV 1 00:10:30.527 #undef SPDK_CONFIG_FUSE 00:10:30.527 #undef SPDK_CONFIG_FUZZER 00:10:30.527 #define SPDK_CONFIG_FUZZER_LIB 00:10:30.527 #undef SPDK_CONFIG_GOLANG 00:10:30.527 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:30.527 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:30.527 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:30.527 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:30.527 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:30.527 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:30.527 #undef SPDK_CONFIG_HAVE_LZ4 00:10:30.527 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:30.527 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:30.527 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:30.527 #define SPDK_CONFIG_IDXD 1 00:10:30.527 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:30.527 #undef SPDK_CONFIG_IPSEC_MB 00:10:30.527 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:30.527 #define SPDK_CONFIG_ISAL 1 00:10:30.527 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:30.527 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:30.527 #define SPDK_CONFIG_LIBDIR 00:10:30.527 #undef SPDK_CONFIG_LTO 00:10:30.527 #define SPDK_CONFIG_MAX_LCORES 128 00:10:30.527 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:30.527 #define SPDK_CONFIG_NVME_CUSE 1 00:10:30.527 #undef SPDK_CONFIG_OCF 00:10:30.527 #define SPDK_CONFIG_OCF_PATH 00:10:30.527 #define SPDK_CONFIG_OPENSSL_PATH 00:10:30.527 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:30.527 #define SPDK_CONFIG_PGO_DIR 00:10:30.527 #undef SPDK_CONFIG_PGO_USE 00:10:30.527 #define SPDK_CONFIG_PREFIX /usr/local 00:10:30.527 #undef SPDK_CONFIG_RAID5F 00:10:30.527 #undef SPDK_CONFIG_RBD 00:10:30.527 #define SPDK_CONFIG_RDMA 1 00:10:30.527 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:30.527 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:30.527 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:30.527 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:30.527 #define SPDK_CONFIG_SHARED 1 00:10:30.527 #undef SPDK_CONFIG_SMA 00:10:30.528 #define SPDK_CONFIG_TESTS 1 00:10:30.528 #undef SPDK_CONFIG_TSAN 
00:10:30.528 #define SPDK_CONFIG_UBLK 1 00:10:30.528 #define SPDK_CONFIG_UBSAN 1 00:10:30.528 #undef SPDK_CONFIG_UNIT_TESTS 00:10:30.528 #undef SPDK_CONFIG_URING 00:10:30.528 #define SPDK_CONFIG_URING_PATH 00:10:30.528 #undef SPDK_CONFIG_URING_ZNS 00:10:30.528 #undef SPDK_CONFIG_USDT 00:10:30.528 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:30.528 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:30.528 #define SPDK_CONFIG_VFIO_USER 1 00:10:30.528 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:30.528 #define SPDK_CONFIG_VHOST 1 00:10:30.528 #define SPDK_CONFIG_VIRTIO 1 00:10:30.528 #undef SPDK_CONFIG_VTUNE 00:10:30.528 #define SPDK_CONFIG_VTUNE_DIR 00:10:30.528 #define SPDK_CONFIG_WERROR 1 00:10:30.528 #define SPDK_CONFIG_WPDK_DIR 00:10:30.528 #undef SPDK_CONFIG_XNVME 00:10:30.528 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:30.528 07:24:48 
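The applications.sh records traced above gate debug-only tooling on the checked-out build's config.h. A minimal sketch of that two-step gate, assuming the heavily backslash-escaped pattern in the trace is just xtrace's rendering of a quoted substring match (the purpose of the combined check is inferred; variable handling here is illustrative):

  # Two-step gate seen at applications.sh@23-24: the build must carry
  # SPDK_CONFIG_DEBUG AND the harness must opt in via SPDK_AUTOTEST_DEBUG_APPS.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  config_h="$rootdir/include/spdk/config.h"
  if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]] \
      && ((SPDK_AUTOTEST_DEBUG_APPS)); then
      echo "debug variants of the SPDK apps may be used"
  fi

In this run config.h (dumped in full above) does define SPDK_CONFIG_DEBUG, so only the opt-in flag decides the outcome.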
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:30.528 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
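The pm/common trace just above assembles the list of power/resource monitors for the run. A condensed reconstruction of that selection; the data source behind the QEMU probe is an assumption (the trace only shows a row of dots compared against "QEMU", not where the string was read from):

  declare -A MONITOR_RESOURCES_SUDO=(
      ["collect-bmc-pm"]=1      # BMC power readings need sudo
      ["collect-cpu-load"]=0
      ["collect-cpu-temp"]=0
      ["collect-vmstat"]=0
  )
  SUDO=("" "sudo -E")                               # indexed by the flags above
  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
  if [[ $(uname -s) == Linux ]]; then
      chassis=$(cat /sys/class/dmi/id/chassis_vendor 2>/dev/null)  # assumed source
      if [[ $chassis != QEMU && ! -e /.dockerenv ]]; then
          # Bare metal: thermal and BMC collectors are meaningful
          MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
      fi
  fi

On this bare-metal node all four collectors end up enabled, matching the two += appends in the trace.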
00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:30.529 07:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:30.529 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:30.530 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
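The long run of paired ': <value>' / 'export SPDK_TEST_*' records from autotest_common.sh@58 onward is the standard bash default-then-export idiom: each flag keeps whatever autorun-spdk.conf already set and otherwise takes a default. A minimal sketch with flags from this run; the defaults shown are this job's traced values, the in-tree defaults may differ:

  : "${SPDK_TEST_NVMF:=0}"              # traced as ': 1' (set by autorun-spdk.conf)
  export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"  # traced as ': tcp'
  export SPDK_TEST_NVMF_TRANSPORT
  : "${SPDK_TEST_NVMF_NICS:=e810}"      # traced as ': e810'
  export SPDK_TEST_NVMF_NICS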
00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
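Immediately above, the harness wires up the sanitizer runtimes: a leak-suppression file is rebuilt and handed to LSan, while the ASan/UBSan option strings were exported a few records earlier. Condensed below, with the path and the one suppression taken verbatim from the log (the trace also cats suppression snippets from the tree, omitted here):

  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"
  echo "leak:libfuse3.so" >> "$asan_suppression_file"    # known FUSE leak, ignored
  export LSAN_OPTIONS=suppressions=$asan_suppression_file
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

exitcode=134 makes UBSan failures look like SIGABRT (128+6) to the harness.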
00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3273540 ]] 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3273540 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
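The records that follow are the xtrace of set_test_storage 2147483648, the helper that picks a directory with enough free space for test artifacts. A condensed sketch of its logic, assuming the df -T field order visible in the trace; the candidate walk is simplified to the single mount this run ends up using, and $testdir is supplied by the caller:

  set_test_storage() {
      local requested_size=$1 target_space new_size
      local source fs size use avail _ mount
      local -A mounts fss sizes avails uses
      # Index every mounted filesystem by mount point (df reports 1K blocks)
      while read -r source fs size use avail _ mount; do
          mounts["$mount"]=$source
          fss["$mount"]=$fs
          sizes["$mount"]=$((size * 1024))
          avails["$mount"]=$((avail * 1024))
          uses["$mount"]=$((use * 1024))
      done < <(df -T | grep -v Filesystem)
      # Resolve which mount backs the test dir, then check for room
      mount=$(df "$testdir" | awk '$1 !~ /Filesystem/{print $6}')
      target_space=${avails["$mount"]}
      ((target_space == 0 || target_space < requested_size)) && return 1
      if [[ ${fss["$mount"]} == overlay || $mount == / ]]; then
          # Refuse to push the root/overlay filesystem past 95% full
          new_size=$((requested_size + uses["$mount"]))
          ((new_size * 100 / sizes["$mount"] > 95)) && return 1
      fi
      export SPDK_TEST_STORAGE=$testdir
      printf '* Found test storage at %s\n' "$testdir"
  }

In this run / is a ~120 GiB overlay with ~114 GiB free, so the requested 2 GiB (requested_size grows to 2214592512 once a small margin is added) fits easily and the 95% guard passes: new_size=9125298176 against sizes[/]=129356517376.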
00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:30.531 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.zwESBG 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.zwESBG/tests/target /tmp/spdk.zwESBG 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:30.532 07:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122445811712 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356517376 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6910705664 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668225536 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678256640 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847934976 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871306752 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23371776 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=387072 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=116736 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:30.532 07:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677728256 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678260736 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=532480 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:30.532 * Looking for test storage... 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122445811712 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9125298176 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:30.532 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:30.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.533 --rc genhtml_branch_coverage=1 00:10:30.533 --rc genhtml_function_coverage=1 00:10:30.533 --rc genhtml_legend=1 00:10:30.533 --rc geninfo_all_blocks=1 00:10:30.533 --rc geninfo_unexecuted_blocks=1 00:10:30.533 00:10:30.533 ' 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:30.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.533 --rc genhtml_branch_coverage=1 00:10:30.533 --rc genhtml_function_coverage=1 00:10:30.533 --rc genhtml_legend=1 00:10:30.533 --rc geninfo_all_blocks=1 00:10:30.533 --rc geninfo_unexecuted_blocks=1 00:10:30.533 00:10:30.533 ' 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:30.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.533 --rc genhtml_branch_coverage=1 00:10:30.533 --rc genhtml_function_coverage=1 00:10:30.533 --rc genhtml_legend=1 00:10:30.533 --rc geninfo_all_blocks=1 00:10:30.533 --rc geninfo_unexecuted_blocks=1 00:10:30.533 00:10:30.533 ' 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:30.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.533 --rc genhtml_branch_coverage=1 00:10:30.533 --rc genhtml_function_coverage=1 00:10:30.533 --rc genhtml_legend=1 00:10:30.533 --rc geninfo_all_blocks=1 00:10:30.533 --rc geninfo_unexecuted_blocks=1 00:10:30.533 00:10:30.533 ' 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
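The lcov probe traced above runs the version comparison from scripts/common.sh: lt 1.15 2 asks whether the installed lcov predates 2.x, which decides between the legacy --rc lcov_* flag spellings and the newer ones. A condensed reconstruction, assuming purely numeric version components (the real helper validates each one with its decimal function):

  cmp_versions() {   # cmp_versions VER1 OP VER2, e.g. cmp_versions 1.15 '<' 2
      local -a ver1 ver2
      local op=$2 v ver1_l ver2_l
      IFS=.-: read -ra ver1 <<< "$1"    # split on '.', '-', ':'
      IFS=.-: read -ra ver2 <<< "$3"
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' || $op == '>=' ]]; return; }
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }

Here lcov 1.x was found, so the coverage run sticks to the --rc lcov_branch_coverage/lcov_function_coverage form echoed into LCOV_OPTS above.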
-- nvmf/common.sh@7 -- # uname -s 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.533 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.534 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.534 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.534 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.534 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:30.534 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.795 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:30.795 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.795 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.796 07:24:48 
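The "[: : integer expression expected" line above is a real, tolerated failure inside nvmf/common.sh's app-argument setup: line 33 feeds an empty variable to test(1)'s numeric -eq, and [ '' -eq 1 ] is a syntax error rather than "false". The trace continues normally afterwards, so the noisy test only costs a log line. Illustrated with a hypothetical stand-in name, plus the usual defensive form:

  flag=''                          # stand-in for whatever expanded empty at line 33
  if [ "$flag" -eq 1 ]; then       # reproduces '[: : integer expression expected'
      echo 'taken'
  fi
  if [ "${flag:-0}" -eq 1 ]; then  # empty treated as 0: quietly false instead
      echo 'taken'
  fi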
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.796 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:38.941 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:38.941 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.941 07:24:56 
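[Editor's sketch] gather_supported_nvmf_pci_devs builds whitelists keyed by vendor:device (Intel E810 0x1592/0x159b, X722 0x37d2, a list of Mellanox ConnectX IDs); since this run is pinned to E810 (the `[[ e810 == e810 ]]` branch), pci_devs collapses to the e810 list and both ports of the 8086:159b adapter are found with the ice driver bound. Roughly the same lookup can be done standalone, assuming lspci is installed:

  # List PCI functions for the E810 device IDs the script whitelists.
  for id in 8086:1592 8086:159b; do
    lspci -D -d "$id"   # -D prints full addresses like 0000:31:00.0
  done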
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.941 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:38.941 Found net devices under 0000:31:00.0: cvl_0_0 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:38.942 Found net devices under 0000:31:00.1: cvl_0_1 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:38.942 07:24:56 
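[Editor's sketch] Each surviving PCI function is then mapped to its kernel netdev through the sysfs glob visible in the trace, /sys/bus/pci/devices/$pci/net/*, which is how cvl_0_0 and cvl_0_1 are resolved here. The equivalent one-liner:

  pci=0000:31:00.0
  ls "/sys/bus/pci/devices/$pci/net/"   # prints the interface name, cvl_0_0 on this host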
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:38.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:38.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:10:38.942 00:10:38.942 --- 10.0.0.2 ping statistics --- 00:10:38.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.942 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:38.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:10:38.942 00:10:38.942 --- 10.0.0.1 ping statistics --- 00:10:38.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.942 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.942 ************************************ 00:10:38.942 START TEST nvmf_filesystem_no_in_capsule 00:10:38.942 ************************************ 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3277513 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3277513 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3277513 ']' 00:10:38.942 
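[Editor's sketch] nvmf_tcp_init is the heart of the phy setup: both E810 ports live in one kernel, so the target port is moved into its own network namespace (cvl_0_0_ns_spdk) so that traffic actually crosses the cabled link instead of being short-circuited by the local stack, and the bidirectional pings above confirm the split works. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port leaves the default ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator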
07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:38.942 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.942 [2024-11-20 07:24:56.492719] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:10:38.942 [2024-11-20 07:24:56.492788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.942 [2024-11-20 07:24:56.590998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.942 [2024-11-20 07:24:56.645797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.942 [2024-11-20 07:24:56.645844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.942 [2024-11-20 07:24:56.645853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.942 [2024-11-20 07:24:56.645861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.942 [2024-11-20 07:24:56.645868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
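[Editor's sketch] nvmfappstart runs the target inside that namespace; the flags in the trace decode as: -i 0 sets the shared-memory id (NVMF_APP_SHM_ID), -e 0xFFFF enables every tracepoint group (hence the spdk_trace notices above), and -m 0xF is a core mask for cores 0-3, matching the four "Reactor started" lines that follow. Verbatim apart from the shortened path:

  ip netns exec cvl_0_0_ns_spdk \
    .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF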
00:10:38.942 [2024-11-20 07:24:56.647793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.942 [2024-11-20 07:24:56.647925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.942 [2024-11-20 07:24:56.648074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.942 [2024-11-20 07:24:56.648075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.202 [2024-11-20 07:24:57.369617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.202 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.463 Malloc1 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.463 07:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.463 [2024-11-20 07:24:57.536033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:39.463 { 00:10:39.463 "name": "Malloc1", 00:10:39.463 "aliases": [ 00:10:39.463 "1e61e4e8-f91c-45c4-9b0e-3429a2d57803" 00:10:39.463 ], 00:10:39.463 "product_name": "Malloc disk", 00:10:39.463 "block_size": 512, 00:10:39.463 "num_blocks": 1048576, 00:10:39.463 "uuid": "1e61e4e8-f91c-45c4-9b0e-3429a2d57803", 00:10:39.463 "assigned_rate_limits": { 00:10:39.463 "rw_ios_per_sec": 0, 00:10:39.463 "rw_mbytes_per_sec": 0, 00:10:39.463 "r_mbytes_per_sec": 0, 00:10:39.463 "w_mbytes_per_sec": 0 00:10:39.463 }, 00:10:39.463 "claimed": true, 00:10:39.463 "claim_type": "exclusive_write", 00:10:39.463 "zoned": false, 00:10:39.463 "supported_io_types": { 00:10:39.463 "read": 
true, 00:10:39.463 "write": true, 00:10:39.463 "unmap": true, 00:10:39.463 "flush": true, 00:10:39.463 "reset": true, 00:10:39.463 "nvme_admin": false, 00:10:39.463 "nvme_io": false, 00:10:39.463 "nvme_io_md": false, 00:10:39.463 "write_zeroes": true, 00:10:39.463 "zcopy": true, 00:10:39.463 "get_zone_info": false, 00:10:39.463 "zone_management": false, 00:10:39.463 "zone_append": false, 00:10:39.463 "compare": false, 00:10:39.463 "compare_and_write": false, 00:10:39.463 "abort": true, 00:10:39.463 "seek_hole": false, 00:10:39.463 "seek_data": false, 00:10:39.463 "copy": true, 00:10:39.463 "nvme_iov_md": false 00:10:39.463 }, 00:10:39.463 "memory_domains": [ 00:10:39.463 { 00:10:39.463 "dma_device_id": "system", 00:10:39.463 "dma_device_type": 1 00:10:39.463 }, 00:10:39.463 { 00:10:39.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.463 "dma_device_type": 2 00:10:39.463 } 00:10:39.463 ], 00:10:39.463 "driver_specific": {} 00:10:39.463 } 00:10:39.463 ]' 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:39.463 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:39.464 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:39.464 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:39.464 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:39.464 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:39.464 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.377 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.377 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:41.377 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.377 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:41.377 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:43.290 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:43.290 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:43.290 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:43.291 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:43.862 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.807 ************************************ 00:10:44.807 START TEST filesystem_ext4 00:10:44.807 ************************************ 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
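[Editor's sketch] Steps 52 through 69 of filesystem.sh reduce to a textbook NVMe-oF provisioning sequence, replayed below with scripts/rpc.py, which rpc_cmd wraps (the $rpc variable is illustrative). -c 0 disables in-capsule data for this first pass; the remaining flags are copied verbatim from the trace:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 512 512 -b Malloc1            # 512 MiB backing bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  mkdir -p /mnt/device
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe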
00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:44.807 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:44.807 mke2fs 1.47.0 (5-Feb-2023) 00:10:44.807 Discarding device blocks: 0/522240 done 00:10:44.807 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:44.807 Filesystem UUID: 5e303f3e-0746-482e-830c-8eb4f81e9d9f 00:10:44.807 Superblock backups stored on blocks: 00:10:44.807 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:44.807 00:10:44.807 Allocating group tables: 0/64 done 00:10:44.807 Writing inode tables: 0/64 done 00:10:44.807 Creating journal (8192 blocks): done 00:10:45.068 Writing superblocks and filesystem accounting information: 0/64 done 00:10:45.068 00:10:45.068 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:45.068 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:50.421 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:50.421 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:50.421 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:50.421 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:50.421 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:50.421 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:50.421 
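[Editor's sketch] Every filesystem subtest in this log (ext4 above, btrfs and xfs below) is the same smoke cycle: format the GPT partition, mount it, create and delete a file with syncs in between, unmount, then assert that the target pid is still alive and the namespace and partition still show up in lsblk. In outline:

  mkfs.ext4 -F /dev/nvme0n1p1        # ext4 takes -F; the btrfs and xfs runs use -f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                 # target process must have survived the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1p1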
07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3277513 00:10:50.421 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:50.421 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:50.421 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:50.421 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:50.421 00:10:50.421 real 0m5.688s 00:10:50.421 user 0m0.026s 00:10:50.421 sys 0m0.078s 00:10:50.421 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:50.422 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:50.422 ************************************ 00:10:50.422 END TEST filesystem_ext4 00:10:50.422 ************************************ 00:10:50.422 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:50.422 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:50.422 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:50.422 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.682 ************************************ 00:10:50.682 START TEST filesystem_btrfs 00:10:50.682 ************************************ 00:10:50.682 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:50.682 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:50.682 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:50.682 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:50.682 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:50.682 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:50.682 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:50.682 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:50.682 07:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:50.682 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:50.682 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:50.682 btrfs-progs v6.8.1 00:10:50.682 See https://btrfs.readthedocs.io for more information. 00:10:50.682 00:10:50.682 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:50.682 NOTE: several default settings have changed in version 5.15, please make sure 00:10:50.682 this does not affect your deployments: 00:10:50.682 - DUP for metadata (-m dup) 00:10:50.682 - enabled no-holes (-O no-holes) 00:10:50.682 - enabled free-space-tree (-R free-space-tree) 00:10:50.682 00:10:50.682 Label: (null) 00:10:50.682 UUID: 3b495be7-ed43-44f3-a0c1-a8cc6d244fa6 00:10:50.682 Node size: 16384 00:10:50.682 Sector size: 4096 (CPU page size: 4096) 00:10:50.682 Filesystem size: 510.00MiB 00:10:50.682 Block group profiles: 00:10:50.682 Data: single 8.00MiB 00:10:50.682 Metadata: DUP 32.00MiB 00:10:50.682 System: DUP 8.00MiB 00:10:50.682 SSD detected: yes 00:10:50.682 Zoned device: no 00:10:50.682 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:50.682 Checksum: crc32c 00:10:50.682 Number of devices: 1 00:10:50.682 Devices: 00:10:50.682 ID SIZE PATH 00:10:50.682 1 510.00MiB /dev/nvme0n1p1 00:10:50.682 00:10:50.682 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:50.682 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3277513 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:51.623 
07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:51.623 00:10:51.623 real 0m0.966s 00:10:51.623 user 0m0.035s 00:10:51.623 sys 0m0.115s 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:51.623 ************************************ 00:10:51.623 END TEST filesystem_btrfs 00:10:51.623 ************************************ 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.623 ************************************ 00:10:51.623 START TEST filesystem_xfs 00:10:51.623 ************************************ 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:51.623 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:51.623 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:51.623 = sectsz=512 attr=2, projid32bit=1 00:10:51.623 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:51.623 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:51.623 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:51.623 = sunit=0 swidth=0 blks 00:10:51.623 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:51.623 log =internal log bsize=4096 blocks=16384, version=2 00:10:51.623 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:51.623 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:53.008 Discarding blocks...Done. 00:10:53.008 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:53.008 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3277513 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:55.554 00:10:55.554 real 0m3.626s 00:10:55.554 user 0m0.025s 00:10:55.554 sys 0m0.081s 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:55.554 ************************************ 00:10:55.554 END TEST filesystem_xfs 00:10:55.554 ************************************ 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:55.554 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.815 07:25:13 
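[Editor's sketch] Teardown mirrors the setup in reverse: the partition is removed under flock (presumably to serialize the table rewrite against concurrent scanners such as udev), the initiator disconnects by NQN, the subsystem is deleted over RPC, and killprocess terminates the target by pid. Condensed from the trace, with $rpc as in the provisioning sketch:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"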
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3277513 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3277513 ']' 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3277513 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:55.815 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3277513 00:10:55.816 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:55.816 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:55.816 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3277513' 00:10:55.816 killing process with pid 3277513 00:10:55.816 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 3277513 00:10:55.816 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 3277513 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:56.077 00:10:56.077 real 0m17.671s 00:10:56.077 user 1m9.752s 00:10:56.077 sys 0m1.444s 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.077 ************************************ 00:10:56.077 END TEST nvmf_filesystem_no_in_capsule 00:10:56.077 ************************************ 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.077 ************************************ 00:10:56.077 START TEST nvmf_filesystem_in_capsule 00:10:56.077 ************************************ 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3281130 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3281130 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3281130 ']' 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:56.077 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.077 [2024-11-20 07:25:14.236726] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:10:56.077 [2024-11-20 07:25:14.236781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.338 [2024-11-20 07:25:14.330808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.338 [2024-11-20 07:25:14.364161] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.338 [2024-11-20 07:25:14.364190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.339 [2024-11-20 07:25:14.364196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.339 [2024-11-20 07:25:14.364201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.339 [2024-11-20 07:25:14.364206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.339 [2024-11-20 07:25:14.365784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.339 [2024-11-20 07:25:14.365869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.339 [2024-11-20 07:25:14.365995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.339 [2024-11-20 07:25:14.365996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.912 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:56.912 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:56.912 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.912 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.912 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.912 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.912 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:56.912 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:56.912 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.912 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.912 [2024-11-20 07:25:15.089749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.912 07:25:15 
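The one knob distinguishing this variant from nvmf_filesystem_no_in_capsule is the transport's in-capsule data size: -c 4096 lets a host carry up to 4 KiB of write payload inside the command capsule itself. Issued standalone with SPDK's rpc.py, the call above would look like (flags copied verbatim from the rpc_cmd line):

# Create the TCP transport; -c sets the in-capsule data size this test exercises
sudo ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py \
    nvmf_create_transport -t tcp -o -u 8192 -c 4096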
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.912 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:56.912 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.912 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.174 Malloc1 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.174 [2024-11-20 07:25:15.232526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:57.174 07:25:15 
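Stripped of xtrace noise, the provisioning sequence above is: create a 512 MB malloc bdev with 512-byte blocks, create the subsystem with its serial number, attach the bdev as a namespace, and open the TCP listener. As standalone rpc.py calls (all names, sizes, and addresses verbatim from the log):

rpc="sudo ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"
$rpc bdev_malloc_create 512 512 -b Malloc1        # 512 MB total, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME                    # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420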
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:57.174 { 00:10:57.174 "name": "Malloc1", 00:10:57.174 "aliases": [ 00:10:57.174 "36a135a6-ba54-45ab-a6d6-0549bf8783de" 00:10:57.174 ], 00:10:57.174 "product_name": "Malloc disk", 00:10:57.174 "block_size": 512, 00:10:57.174 "num_blocks": 1048576, 00:10:57.174 "uuid": "36a135a6-ba54-45ab-a6d6-0549bf8783de", 00:10:57.174 "assigned_rate_limits": { 00:10:57.174 "rw_ios_per_sec": 0, 00:10:57.174 "rw_mbytes_per_sec": 0, 00:10:57.174 "r_mbytes_per_sec": 0, 00:10:57.174 "w_mbytes_per_sec": 0 00:10:57.174 }, 00:10:57.174 "claimed": true, 00:10:57.174 "claim_type": "exclusive_write", 00:10:57.174 "zoned": false, 00:10:57.174 "supported_io_types": { 00:10:57.174 "read": true, 00:10:57.174 "write": true, 00:10:57.174 "unmap": true, 00:10:57.174 "flush": true, 00:10:57.174 "reset": true, 00:10:57.174 "nvme_admin": false, 00:10:57.174 "nvme_io": false, 00:10:57.174 "nvme_io_md": false, 00:10:57.174 "write_zeroes": true, 00:10:57.174 "zcopy": true, 00:10:57.174 "get_zone_info": false, 00:10:57.174 "zone_management": false, 00:10:57.174 "zone_append": false, 00:10:57.174 "compare": false, 00:10:57.174 "compare_and_write": false, 00:10:57.174 "abort": true, 00:10:57.174 "seek_hole": false, 00:10:57.174 "seek_data": false, 00:10:57.174 "copy": true, 00:10:57.174 "nvme_iov_md": false 00:10:57.174 }, 00:10:57.174 "memory_domains": [ 00:10:57.174 { 00:10:57.174 "dma_device_id": "system", 00:10:57.174 "dma_device_type": 1 00:10:57.174 }, 00:10:57.174 { 00:10:57.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.174 "dma_device_type": 2 00:10:57.174 } 00:10:57.174 ], 00:10:57.174 "driver_specific": {} 00:10:57.174 } 00:10:57.174 ]' 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:57.174 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:59.090 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:59.090 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:59.090 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.090 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:59.090 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:01.004 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:01.004 07:25:19 
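On the host side these steps read much more simply without the tracing: derive the expected device size from the bdev JSON (block_size x num_blocks = 512 x 1048576 = 536870912 bytes), connect with nvme-cli, wait for a block device carrying the subsystem serial, then lay down a GPT partition. A condensed sketch (commands and the grep -oP pattern are verbatim from the log; the wait loop mirrors waitforserial's visible i++ <= 15 / sleep 2 logic):

# Expected device size, from the bdev JSON shown above
bs=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')
nb=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')
malloc_size=$((bs * nb))        # 512 * 1048576 = 536870912

sudo nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# Poll until exactly one namespace with our serial enumerates
i=0
while (( i++ <= 15 )); do
    sleep 2
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
done

nvme_name=$(lsblk -l -o NAME,SERIAL |
    grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')       # -> nvme0n1
sudo parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%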
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:01.264 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.648 ************************************ 00:11:02.648 START TEST filesystem_in_capsule_ext4 00:11:02.648 ************************************ 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:02.648 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:02.648 mke2fs 1.47.0 (5-Feb-2023) 00:11:02.648 Discarding device blocks: 0/522240 done 00:11:02.648 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:02.648 Filesystem UUID: 5270aaf1-ee2b-43f7-8591-62b05ff19949 00:11:02.648 Superblock backups stored on blocks: 00:11:02.648 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:02.648 00:11:02.648 Allocating group tables: 0/64 done 00:11:02.648 Writing inode tables: 
0/64 done 00:11:03.218 Creating journal (8192 blocks): done 00:11:03.218 Writing superblocks and filesystem accounting information: 0/64 done 00:11:03.218 00:11:03.218 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:03.218 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3281130 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.801 00:11:09.801 real 0m7.133s 00:11:09.801 user 0m0.027s 00:11:09.801 sys 0m0.078s 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:09.801 ************************************ 00:11:09.801 END TEST filesystem_in_capsule_ext4 00:11:09.801 ************************************ 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.801 
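make_filesystem, the common/autotest_common.sh helper driving the mkfs runs above and below, boils down to choosing the right force flag per filesystem. Reconstructed from the visible xtrace (the helper also keeps a retry counter i, whose loop body does not show in this log):

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0           # retry counter per the xtrace; the loop is not visible here
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F        # mkfs.ext4 spells "force" as -F ...
    else
        force=-f        # ... btrfs and xfs spell it -f
    fi
    mkfs."$fstype" $force "$dev_name"
}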
************************************ 00:11:09.801 START TEST filesystem_in_capsule_btrfs 00:11:09.801 ************************************ 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:09.801 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:10.062 btrfs-progs v6.8.1 00:11:10.062 See https://btrfs.readthedocs.io for more information. 00:11:10.062 00:11:10.062 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:10.062 NOTE: several default settings have changed in version 5.15, please make sure 00:11:10.062 this does not affect your deployments: 00:11:10.062 - DUP for metadata (-m dup) 00:11:10.062 - enabled no-holes (-O no-holes) 00:11:10.062 - enabled free-space-tree (-R free-space-tree) 00:11:10.062 00:11:10.062 Label: (null) 00:11:10.062 UUID: cc4bd810-2bc4-4585-b886-cfba3acf32bb 00:11:10.062 Node size: 16384 00:11:10.062 Sector size: 4096 (CPU page size: 4096) 00:11:10.062 Filesystem size: 510.00MiB 00:11:10.062 Block group profiles: 00:11:10.062 Data: single 8.00MiB 00:11:10.063 Metadata: DUP 32.00MiB 00:11:10.063 System: DUP 8.00MiB 00:11:10.063 SSD detected: yes 00:11:10.063 Zoned device: no 00:11:10.063 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:10.063 Checksum: crc32c 00:11:10.063 Number of devices: 1 00:11:10.063 Devices: 00:11:10.063 ID SIZE PATH 00:11:10.063 1 510.00MiB /dev/nvme0n1p1 00:11:10.063 00:11:10.063 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:10.063 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.633 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.633 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3281130 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.894 00:11:10.894 real 0m1.206s 00:11:10.894 user 0m0.027s 00:11:10.894 sys 0m0.124s 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:10.894 ************************************ 00:11:10.894 END TEST filesystem_in_capsule_btrfs 00:11:10.894 ************************************ 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.894 ************************************ 00:11:10.894 START TEST filesystem_in_capsule_xfs 00:11:10.894 ************************************ 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:10.894 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:10.894 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:10.894 = sectsz=512 attr=2, projid32bit=1 00:11:10.894 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:10.894 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:10.894 data = bsize=4096 blocks=130560, imaxpct=25 00:11:10.894 = sunit=0 swidth=0 blks 00:11:10.894 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:10.894 log =internal log bsize=4096 blocks=16384, version=2 00:11:10.894 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:10.894 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:11.844 Discarding blocks...Done. 
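Each filesystem then gets the identical smoke test from target/filesystem.sh: mount, write and sync a file, delete it, sync again, unmount, and assert both that the target process survived and that the device nodes still enumerate. Flattened out:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

kill -0 "$nvmfpid"                       # target must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still present
lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still present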
00:11:11.844 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:11.844 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3281130 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:14.387 00:11:14.387 real 0m3.299s 00:11:14.387 user 0m0.026s 00:11:14.387 sys 0m0.080s 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:14.387 ************************************ 00:11:14.387 END TEST filesystem_in_capsule_xfs 00:11:14.387 ************************************ 00:11:14.387 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:14.648 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:14.909 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.909 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.909 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:11:14.909 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:14.909 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3281130 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3281130 ']' 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3281130 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3281130 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3281130' 00:11:15.170 killing process with pid 3281130 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 3281130 00:11:15.170 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 3281130 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:15.431 00:11:15.431 real 0m19.241s 00:11:15.431 user 1m16.145s 00:11:15.431 sys 0m1.391s 00:11:15.431 07:25:33 
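Teardown runs the setup in reverse, as the trace above shows: delete the partition under an flock on the whole device, sync, disconnect the initiator, remove the subsystem over RPC, then kill the target and reap it. Condensed (killprocess additionally checks the process name before signalling, which this sketch skips):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"
wait "$nvmfpid"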
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.431 ************************************ 00:11:15.431 END TEST nvmf_filesystem_in_capsule 00:11:15.431 ************************************ 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.431 rmmod nvme_tcp 00:11:15.431 rmmod nvme_fabrics 00:11:15.431 rmmod nvme_keyring 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.431 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:17.976 00:11:17.976 real 0m47.355s 00:11:17.976 user 2m28.328s 00:11:17.976 sys 0m8.799s 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.976 
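nvmftestfini's cleanup is visible in the rmmod/iptables lines above: unload the kernel NVMe-oF initiator stack (which pulls out nvme_tcp, nvme_fabrics, and nvme_keyring, per the rmmod messages), strip only the SPDK-tagged firewall rules, and flush the test interface. As plain commands (ordering follows the trace; the real helper retries modprobe up to 20 times under set +e):

sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep non-SPDK rules
ip -4 addr flush cvl_0_1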
************************************ 00:11:17.976 END TEST nvmf_filesystem 00:11:17.976 ************************************ 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:17.976 ************************************ 00:11:17.976 START TEST nvmf_target_discovery 00:11:17.976 ************************************ 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:17.976 * Looking for test storage... 00:11:17.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.976 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:17.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.977 --rc genhtml_branch_coverage=1 00:11:17.977 --rc genhtml_function_coverage=1 00:11:17.977 --rc genhtml_legend=1 00:11:17.977 --rc geninfo_all_blocks=1 00:11:17.977 --rc geninfo_unexecuted_blocks=1 00:11:17.977 00:11:17.977 ' 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:17.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.977 --rc genhtml_branch_coverage=1 00:11:17.977 --rc genhtml_function_coverage=1 00:11:17.977 --rc genhtml_legend=1 00:11:17.977 --rc geninfo_all_blocks=1 00:11:17.977 --rc geninfo_unexecuted_blocks=1 00:11:17.977 00:11:17.977 ' 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:17.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.977 --rc genhtml_branch_coverage=1 00:11:17.977 --rc genhtml_function_coverage=1 00:11:17.977 --rc genhtml_legend=1 00:11:17.977 --rc geninfo_all_blocks=1 00:11:17.977 --rc geninfo_unexecuted_blocks=1 00:11:17.977 00:11:17.977 ' 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:17.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.977 --rc genhtml_branch_coverage=1 00:11:17.977 --rc genhtml_function_coverage=1 00:11:17.977 --rc genhtml_legend=1 00:11:17.977 --rc geninfo_all_blocks=1 00:11:17.977 --rc geninfo_unexecuted_blocks=1 00:11:17.977 00:11:17.977 ' 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
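The lcov preamble above gates coverage flags on the tool's version using scripts/common.sh's lt/cmp_versions: split both version strings on ".", "-", and ":", then compare field by field. A self-contained reimplementation of just the less-than case (the real helper also normalizes each field through decimal(); this sketch assumes plain integer fields):

version_lt() {    # usage: version_lt 1.15 2  -> exit 0 iff $1 < $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1      # equal is not less-than
}
version_lt 1.15 2 && echo "old lcov: keep the --rc lcov_* option spellings"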
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.977 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.978 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.978 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.978 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.978 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.978 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.978 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:17.978 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:17.978 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.978 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.143 07:25:43 
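The "[: : integer expression expected" complaint above is a benign scripting artifact, not a test failure: nvmf/common.sh line 33 ends up evaluating [ '' -eq 1 ] because the variable it examines is unset in this job, and test(1) refuses an empty operand where it needs an integer. The usual defensive spelling would avoid the noise (VAR is a stand-in name; the log does not show which variable is checked):

# What line 33 effectively ran, and why it warned:
[ '' -eq 1 ]                 # -> "[: : integer expression expected"

# Defaulting the expansion avoids the error:
if [ "${VAR:-0}" -eq 1 ]; then
    echo "feature enabled"   # placeholder for the real branch body
fi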
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:26.143 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:26.143 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:26.143 Found net devices under 0000:31:00.0: cvl_0_0 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
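The scan above keys each PCI function off its vendor:device pair (the Intel E810 ports here report 0x8086:0x159b and bind the ice driver) and then resolves the function to a kernel interface through /sys/bus/pci/devices/$pci/net. A minimal standalone sketch of that lookup, assuming only bash and sysfs rather than the nvmf/common.sh helpers:

  # list net interfaces backed by Intel E810 functions (0x8086:0x159b),
  # the same sysfs walk the pci_net_devs array is built from
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "${pci##*/} -> ${net##*/}"
      done
  done 2>/dev/null

On this node the walk yields the two cvl_0_* interfaces reported just above and below for ports 0000:31:00.0 and 0000:31:00.1.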
00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:26.143 Found net devices under 0000:31:00.1: cvl_0_1 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.143 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.143 07:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:11:26.144 00:11:26.144 --- 10.0.0.2 ping statistics --- 00:11:26.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.144 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:11:26.144 00:11:26.144 --- 10.0.0.1 ping statistics --- 00:11:26.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.144 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3289300 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3289300 00:11:26.144 07:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 3289300 ']' 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:26.144 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.144 [2024-11-20 07:25:43.502442] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:11:26.144 [2024-11-20 07:25:43.502511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.144 [2024-11-20 07:25:43.605802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.144 [2024-11-20 07:25:43.659136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.144 [2024-11-20 07:25:43.659188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.144 [2024-11-20 07:25:43.659198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.144 [2024-11-20 07:25:43.659205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.144 [2024-11-20 07:25:43.659211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
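Condensed from the nvmf_tcp_init trace above: the target-side E810 port is moved into a private network namespace and the two ports are cross-wired over 10.0.0.0/24, so initiator and target traffic actually traverses the NICs instead of short-circuiting through loopback. The commands are as logged; only the nvmf_tgt path is shortened here to be relative to the spdk checkout:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # host -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # and back
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The two sub-millisecond pings above confirm the wiring before nvmftestinit returns 0 and nvmfappstart launches the target inside the namespace as pid 3289300.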
00:11:26.144 [2024-11-20 07:25:43.661602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.144 [2024-11-20 07:25:43.661783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.144 [2024-11-20 07:25:43.661891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.144 [2024-11-20 07:25:43.661893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.144 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:26.144 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:26.144 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.144 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:26.144 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 [2024-11-20 07:25:44.375451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 Null1 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 07:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 [2024-11-20 07:25:44.443028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 Null2 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:26.405 Null3 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:26.405 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.406 Null4 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.406 07:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.406 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.667 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.667 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:11:26.667 00:11:26.667 Discovery Log Number of Records 6, Generation counter 6 00:11:26.667 =====Discovery Log Entry 0====== 00:11:26.667 trtype: tcp 00:11:26.667 adrfam: ipv4 00:11:26.667 subtype: current discovery subsystem 00:11:26.667 treq: not required 00:11:26.667 portid: 0 00:11:26.667 trsvcid: 4420 00:11:26.667 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.667 traddr: 10.0.0.2 00:11:26.667 eflags: explicit discovery connections, duplicate discovery information 00:11:26.667 sectype: none 00:11:26.667 =====Discovery Log Entry 1====== 00:11:26.667 trtype: tcp 00:11:26.667 adrfam: ipv4 00:11:26.667 subtype: nvme subsystem 00:11:26.667 treq: not required 00:11:26.667 portid: 0 00:11:26.667 trsvcid: 4420 00:11:26.667 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:26.667 traddr: 10.0.0.2 00:11:26.667 eflags: none 00:11:26.668 sectype: none 00:11:26.668 =====Discovery Log Entry 2====== 00:11:26.668 trtype: tcp 00:11:26.668 adrfam: ipv4 00:11:26.668 subtype: nvme subsystem 00:11:26.668 treq: not required 00:11:26.668 portid: 0 00:11:26.668 trsvcid: 4420 00:11:26.668 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:26.668 traddr: 10.0.0.2 00:11:26.668 eflags: none 00:11:26.668 sectype: none 00:11:26.668 =====Discovery Log Entry 3====== 00:11:26.668 trtype: tcp 00:11:26.668 adrfam: ipv4 00:11:26.668 subtype: nvme subsystem 00:11:26.668 treq: not required 00:11:26.668 portid: 0 00:11:26.668 trsvcid: 4420 00:11:26.668 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:26.668 traddr: 10.0.0.2 00:11:26.668 eflags: none 00:11:26.668 sectype: none 00:11:26.668 =====Discovery Log Entry 4====== 00:11:26.668 trtype: tcp 00:11:26.668 adrfam: ipv4 00:11:26.668 subtype: nvme subsystem 
00:11:26.668 treq: not required 00:11:26.668 portid: 0 00:11:26.668 trsvcid: 4420 00:11:26.668 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:26.668 traddr: 10.0.0.2 00:11:26.668 eflags: none 00:11:26.668 sectype: none 00:11:26.668 =====Discovery Log Entry 5====== 00:11:26.668 trtype: tcp 00:11:26.668 adrfam: ipv4 00:11:26.668 subtype: discovery subsystem referral 00:11:26.668 treq: not required 00:11:26.668 portid: 0 00:11:26.668 trsvcid: 4430 00:11:26.668 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.668 traddr: 10.0.0.2 00:11:26.668 eflags: none 00:11:26.668 sectype: none 00:11:26.668 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:26.668 Perform nvmf subsystem discovery via RPC 00:11:26.668 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:26.668 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.668 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.668 [ 00:11:26.668 { 00:11:26.668 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:26.668 "subtype": "Discovery", 00:11:26.668 "listen_addresses": [ 00:11:26.668 { 00:11:26.668 "trtype": "TCP", 00:11:26.668 "adrfam": "IPv4", 00:11:26.668 "traddr": "10.0.0.2", 00:11:26.668 "trsvcid": "4420" 00:11:26.668 } 00:11:26.668 ], 00:11:26.668 "allow_any_host": true, 00:11:26.668 "hosts": [] 00:11:26.668 }, 00:11:26.668 { 00:11:26.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:26.668 "subtype": "NVMe", 00:11:26.668 "listen_addresses": [ 00:11:26.668 { 00:11:26.668 "trtype": "TCP", 00:11:26.668 "adrfam": "IPv4", 00:11:26.668 "traddr": "10.0.0.2", 00:11:26.668 "trsvcid": "4420" 00:11:26.668 } 00:11:26.668 ], 00:11:26.668 "allow_any_host": true, 00:11:26.668 "hosts": [], 00:11:26.668 "serial_number": "SPDK00000000000001", 00:11:26.668 "model_number": "SPDK bdev Controller", 00:11:26.668 "max_namespaces": 32, 00:11:26.668 "min_cntlid": 1, 00:11:26.668 "max_cntlid": 65519, 00:11:26.668 "namespaces": [ 00:11:26.668 { 00:11:26.668 "nsid": 1, 00:11:26.668 "bdev_name": "Null1", 00:11:26.668 "name": "Null1", 00:11:26.668 "nguid": "97353020330E4AE6BA93D7A97BE7B626", 00:11:26.668 "uuid": "97353020-330e-4ae6-ba93-d7a97be7b626" 00:11:26.668 } 00:11:26.668 ] 00:11:26.668 }, 00:11:26.668 { 00:11:26.668 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:26.668 "subtype": "NVMe", 00:11:26.668 "listen_addresses": [ 00:11:26.668 { 00:11:26.668 "trtype": "TCP", 00:11:26.668 "adrfam": "IPv4", 00:11:26.668 "traddr": "10.0.0.2", 00:11:26.668 "trsvcid": "4420" 00:11:26.668 } 00:11:26.668 ], 00:11:26.668 "allow_any_host": true, 00:11:26.668 "hosts": [], 00:11:26.668 "serial_number": "SPDK00000000000002", 00:11:26.668 "model_number": "SPDK bdev Controller", 00:11:26.668 "max_namespaces": 32, 00:11:26.668 "min_cntlid": 1, 00:11:26.668 "max_cntlid": 65519, 00:11:26.668 "namespaces": [ 00:11:26.668 { 00:11:26.668 "nsid": 1, 00:11:26.668 "bdev_name": "Null2", 00:11:26.668 "name": "Null2", 00:11:26.668 "nguid": "ED1266A8C20249B9A3AD5D70DB75C837", 00:11:26.668 "uuid": "ed1266a8-c202-49b9-a3ad-5d70db75c837" 00:11:26.668 } 00:11:26.668 ] 00:11:26.668 }, 00:11:26.668 { 00:11:26.668 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:26.668 "subtype": "NVMe", 00:11:26.668 "listen_addresses": [ 00:11:26.668 { 00:11:26.668 "trtype": "TCP", 00:11:26.668 "adrfam": "IPv4", 00:11:26.668 "traddr": "10.0.0.2", 
00:11:26.668 "trsvcid": "4420" 00:11:26.668 } 00:11:26.668 ], 00:11:26.668 "allow_any_host": true, 00:11:26.668 "hosts": [], 00:11:26.668 "serial_number": "SPDK00000000000003", 00:11:26.668 "model_number": "SPDK bdev Controller", 00:11:26.668 "max_namespaces": 32, 00:11:26.668 "min_cntlid": 1, 00:11:26.668 "max_cntlid": 65519, 00:11:26.668 "namespaces": [ 00:11:26.668 { 00:11:26.668 "nsid": 1, 00:11:26.668 "bdev_name": "Null3", 00:11:26.668 "name": "Null3", 00:11:26.668 "nguid": "47F6153526C740FA82ADF045221C4680", 00:11:26.668 "uuid": "47f61535-26c7-40fa-82ad-f045221c4680" 00:11:26.668 } 00:11:26.668 ] 00:11:26.668 }, 00:11:26.668 { 00:11:26.668 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:26.668 "subtype": "NVMe", 00:11:26.668 "listen_addresses": [ 00:11:26.668 { 00:11:26.668 "trtype": "TCP", 00:11:26.668 "adrfam": "IPv4", 00:11:26.668 "traddr": "10.0.0.2", 00:11:26.669 "trsvcid": "4420" 00:11:26.669 } 00:11:26.669 ], 00:11:26.669 "allow_any_host": true, 00:11:26.669 "hosts": [], 00:11:26.669 "serial_number": "SPDK00000000000004", 00:11:26.669 "model_number": "SPDK bdev Controller", 00:11:26.669 "max_namespaces": 32, 00:11:26.669 "min_cntlid": 1, 00:11:26.669 "max_cntlid": 65519, 00:11:26.669 "namespaces": [ 00:11:26.669 { 00:11:26.669 "nsid": 1, 00:11:26.669 "bdev_name": "Null4", 00:11:26.669 "name": "Null4", 00:11:26.669 "nguid": "E7F2F96024774633B645583C8A06118E", 00:11:26.669 "uuid": "e7f2f960-2477-4633-b645-583c8a06118e" 00:11:26.669 } 00:11:26.669 ] 00:11:26.669 } 00:11:26.669 ] 00:11:26.669 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.669 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:26.669 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.669 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.669 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.669 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.930 07:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.930 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:26.931 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:26.931 07:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.931 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.931 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.931 rmmod nvme_tcp 00:11:26.931 rmmod nvme_fabrics 00:11:26.931 rmmod nvme_keyring 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3289300 ']' 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3289300 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 3289300 ']' 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 3289300 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:26.931 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3289300 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3289300' 00:11:27.192 killing process with pid 3289300 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 3289300 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 3289300 00:11:27.192 07:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.192 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:29.735 00:11:29.735 real 0m11.731s 00:11:29.735 user 0m8.995s 00:11:29.735 sys 0m6.131s 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:29.735 ************************************ 00:11:29.735 END TEST nvmf_target_discovery 00:11:29.735 ************************************ 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.735 ************************************ 00:11:29.735 START TEST nvmf_referrals 00:11:29.735 ************************************ 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:29.735 * Looking for test storage... 
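That wraps up nvmf_target_discovery (0m11.731s real time per the summary above). Stripped of the rpc_cmd/xtrace plumbing, the test body reduces to a short RPC sequence; it is spelled out here as direct scripts/rpc.py calls against the default /var/tmp/spdk.sock socket, showing cnode1 only since cnode2..4 repeat the same three calls with their own Null bdev and serial:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_null_create Null1 102400 512         # NULL_BDEV_SIZE x NULL_BLOCK_SIZE
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # teardown mirrors setup
  scripts/rpc.py bdev_null_delete Null1
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

In the nvme discover output above, entries 1-4 are the four cnodeN subsystems, entry 0 is the discovery subsystem itself, and entry 5 is the 4430 referral added by nvmf_discovery_add_referral.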
00:11:29.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:29.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.735 --rc genhtml_branch_coverage=1 00:11:29.735 --rc genhtml_function_coverage=1 00:11:29.735 --rc genhtml_legend=1 00:11:29.735 --rc geninfo_all_blocks=1 00:11:29.735 --rc geninfo_unexecuted_blocks=1 00:11:29.735 00:11:29.735 ' 00:11:29.735 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:29.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.735 --rc genhtml_branch_coverage=1 00:11:29.735 --rc genhtml_function_coverage=1 00:11:29.736 --rc genhtml_legend=1 00:11:29.736 --rc geninfo_all_blocks=1 00:11:29.736 --rc geninfo_unexecuted_blocks=1 00:11:29.736 00:11:29.736 ' 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:29.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.736 --rc genhtml_branch_coverage=1 00:11:29.736 --rc genhtml_function_coverage=1 00:11:29.736 --rc genhtml_legend=1 00:11:29.736 --rc geninfo_all_blocks=1 00:11:29.736 --rc geninfo_unexecuted_blocks=1 00:11:29.736 00:11:29.736 ' 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:29.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.736 --rc genhtml_branch_coverage=1 00:11:29.736 --rc genhtml_function_coverage=1 00:11:29.736 --rc genhtml_legend=1 00:11:29.736 --rc geninfo_all_blocks=1 00:11:29.736 --rc geninfo_unexecuted_blocks=1 00:11:29.736 00:11:29.736 ' 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.736 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:37.960 07:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.960 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:37.961 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:37.961 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:37.961 
07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:37.961 Found net devices under 0000:31:00.0: cvl_0_0 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:37.961 Found net devices under 0000:31:00.1: cvl_0_1 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:37.961 07:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:37.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:11:37.961 00:11:37.961 --- 10.0.0.2 ping statistics --- 00:11:37.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.961 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:11:37.961 00:11:37.961 --- 10.0.0.1 ping statistics --- 00:11:37.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.961 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.961 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.962 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:37.962 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:37.962 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:37.962 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.962 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3293804 00:11:37.962 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3293804 00:11:37.962 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.962 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 3293804 ']' 00:11:37.962 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.962 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:37.962 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
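The plumbing that produced those ping replies is compact enough to reproduce by hand. Condensed from the ip/iptables commands above (cvl_0_0 and cvl_0_1 are this rig's E810 ports; substitute your own interface names):

ip netns add cvl_0_0_ns_spdk                                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP data port
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator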
00:11:37.962 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:37.962 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.962 [2024-11-20 07:25:55.481400] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:11:37.962 [2024-11-20 07:25:55.481471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.962 [2024-11-20 07:25:55.583403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.962 [2024-11-20 07:25:55.637730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.962 [2024-11-20 07:25:55.637789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.962 [2024-11-20 07:25:55.637797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.962 [2024-11-20 07:25:55.637804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.962 [2024-11-20 07:25:55.637810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.962 [2024-11-20 07:25:55.640239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.962 [2024-11-20 07:25:55.640397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.962 [2024-11-20 07:25:55.640562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.962 [2024-11-20 07:25:55.640562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.223 [2024-11-20 07:25:56.368938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
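At this point the harness has a working data path; what the log shows next is nvmfappstart followed by the referral RPCs. A condensed sketch of that sequence, assuming the SPDK repo root as the working directory and simplifying the suite's waitforlisten helper to a plain socket poll (the real helper in autotest_common.sh also retries an RPC call):

# Launch the target inside the namespace, mirroring the -i 0 -e 0xFFFF -m 0xF line above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Simplified stand-in for waitforlisten: poll for the RPC socket.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do            # NVMF_REFERRAL_IP_1..3
    $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
$rpc nvmf_discovery_get_referrals | jq length          # the test asserts this prints 3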
00:11:38.223 [2024-11-20 07:25:56.392064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.223 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.484 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:38.746 07:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.746 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:39.008 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:39.269 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:39.269 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:39.269 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:39.269 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:39.269 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:39.269 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.269 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:39.269 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:39.269 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:39.269 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:39.269 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.532 07:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:39.532 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:39.793 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:39.793 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:39.793 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:39.793 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:39.793 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:39.794 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.794 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:40.054 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:40.054 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:40.054 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:40.054 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:40.054 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.054 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:40.314 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.315 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.576 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:40.576 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:40.576 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:40.576 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:40.576 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:40.576 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:40.576 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
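Before the module unload below, it is worth spelling out the host-side probe that get_referral_ips nvme ran at each step above: discover against the discovery service on port 8009 and keep every record except the current discovery subsystem itself. As a standalone command (the hostnqn/hostid values are the ones this rig generated):

nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
    --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
    -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
  | sort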
00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:40.577 rmmod nvme_tcp 00:11:40.577 rmmod nvme_fabrics 00:11:40.577 rmmod nvme_keyring 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3293804 ']' 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3293804 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 3293804 ']' 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 3293804 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3293804 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3293804' 00:11:40.577 killing process with pid 3293804 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 3293804 00:11:40.577 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 3293804 00:11:40.838 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:40.838 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:40.838 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:40.838 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:40.838 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:40.838 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:40.838 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:40.838 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.838 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:40.838 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.838 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.838 07:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.755 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.755 00:11:42.755 real 0m13.416s 00:11:42.755 user 0m15.891s 00:11:42.755 sys 0m6.669s 00:11:42.755 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:42.755 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.755 ************************************ 00:11:42.755 END TEST nvmf_referrals 00:11:42.755 ************************************ 00:11:42.755 07:26:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:42.755 07:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:42.755 07:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:42.755 07:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:43.017 ************************************ 00:11:43.017 START TEST nvmf_connect_disconnect 00:11:43.017 ************************************ 00:11:43.017 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:43.017 * Looking for test storage... 00:11:43.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.017 07:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.017 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:43.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.018 --rc genhtml_branch_coverage=1 00:11:43.018 --rc genhtml_function_coverage=1 00:11:43.018 --rc genhtml_legend=1 00:11:43.018 --rc geninfo_all_blocks=1 00:11:43.018 --rc geninfo_unexecuted_blocks=1 00:11:43.018 00:11:43.018 ' 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:43.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.018 --rc genhtml_branch_coverage=1 00:11:43.018 --rc genhtml_function_coverage=1 00:11:43.018 --rc genhtml_legend=1 00:11:43.018 --rc geninfo_all_blocks=1 00:11:43.018 --rc geninfo_unexecuted_blocks=1 00:11:43.018 00:11:43.018 ' 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:43.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.018 --rc genhtml_branch_coverage=1 00:11:43.018 --rc genhtml_function_coverage=1 00:11:43.018 --rc genhtml_legend=1 00:11:43.018 --rc geninfo_all_blocks=1 00:11:43.018 --rc geninfo_unexecuted_blocks=1 00:11:43.018 00:11:43.018 ' 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:43.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.018 --rc genhtml_branch_coverage=1 00:11:43.018 --rc genhtml_function_coverage=1 00:11:43.018 --rc genhtml_legend=1 00:11:43.018 --rc geninfo_all_blocks=1 00:11:43.018 --rc geninfo_unexecuted_blocks=1 00:11:43.018 00:11:43.018 ' 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.018 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.281 07:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.281 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.440 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.440 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.440 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.441 
07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:51.441 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.441 
07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:51.441 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:51.441 Found net devices under 0000:31:00.0: cvl_0_0 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
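The xtrace above is walking gather_supported_nvmf_pci_devs: for each detected PCI NIC it globs /sys/bus/pci/devices/$pci/net/ to find the kernel interface backing that port ("Found net devices under 0000:31:00.0: cvl_0_0"). A minimal sketch of the same lookup, assuming only a PCI address like the one in the log; the helper name pci_to_netdev is illustrative, not a function in nvmf/common.sh:

  # List kernel net interfaces backed by a given PCI device, the same
  # sysfs globbing the harness traces above.
  pci_to_netdev() {
      local pci=$1 dev
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $dev ]] || continue     # glob may match nothing
          echo "${dev##*/}"             # strip path, keep the ifname
      done
  }
  pci_to_netdev 0000:31:00.0            # -> cvl_0_0 on this node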
00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:51.441 Found net devices under 0000:31:00.1: cvl_0_1 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.441 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:51.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:11:51.442 00:11:51.442 --- 10.0.0.2 ping statistics --- 00:11:51.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.442 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:51.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:11:51.442 00:11:51.442 --- 10.0.0.1 ping statistics --- 00:11:51.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.442 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3298926 00:11:51.442 07:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3298926 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 3298926 ']' 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:51.442 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.442 [2024-11-20 07:26:08.971463] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:11:51.442 [2024-11-20 07:26:08.971532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.442 [2024-11-20 07:26:09.074023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.442 [2024-11-20 07:26:09.126579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.442 [2024-11-20 07:26:09.126632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.442 [2024-11-20 07:26:09.126641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.442 [2024-11-20 07:26:09.126648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.442 [2024-11-20 07:26:09.126654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
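At this point nvmfappstart has launched the target inside the cvl_0_0_ns_spdk namespace and is blocking in waitforlisten until the RPC socket appears. A simplified stand-in for that launch-and-wait step, with the binary path and flags taken from the log (the real waitforlisten helper also retries RPC calls against the socket, which is omitted here):

  # Launch nvmf_tgt in the target namespace, then poll for the RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in {1..100}; do
      [[ -S /var/tmp/spdk.sock ]] && break   # UNIX domain socket is up
      sleep 0.1
  done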
00:11:51.442 [2024-11-20 07:26:09.128789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.442 [2024-11-20 07:26:09.128907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.442 [2024-11-20 07:26:09.129066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.442 [2024-11-20 07:26:09.129068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.704 [2024-11-20 07:26:09.850633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.704 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.966 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.966 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:51.966 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.966 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.966 07:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.966 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.966 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.966 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:51.967 [2024-11-20 07:26:09.930216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.967 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.967 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:51.967 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:51.967 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:56.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.418 rmmod nvme_tcp 00:12:10.418 rmmod nvme_fabrics 00:12:10.418 rmmod nvme_keyring 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3298926 ']' 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3298926 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3298926 ']' 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 3298926 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
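The rpc_cmd calls traced above are the whole body of the connect_disconnect test: create the TCP transport, back a subsystem with a 64 MiB malloc bdev, listen on 10.0.0.2:4420, then connect and disconnect an initiator num_iterations=5 times (the five "disconnected 1 controller(s)" lines). A condensed sketch of that sequence, with every flag and NQN copied from the log; the script additionally passes the --hostnqn/--hostid pair from NVME_HOST to nvme connect, omitted here for brevity:

  # RPC setup plus the 5-iteration connect/disconnect loop traced above.
  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512                      # -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in {1..5}; do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # "disconnected 1 controller(s)"
  done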
00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3298926 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3298926' 00:12:10.418 killing process with pid 3298926 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 3298926 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 3298926 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:10.418 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:10.419 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:10.419 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:10.419 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:10.419 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:10.419 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:10.419 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:10.419 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:10.419 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.419 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.419 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.331 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:12.331 00:12:12.331 real 0m29.535s 00:12:12.331 user 1m19.146s 00:12:12.331 sys 0m7.245s 00:12:12.331 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.331 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:12.331 ************************************ 00:12:12.331 END TEST nvmf_connect_disconnect 00:12:12.331 ************************************ 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:12.592 07:26:30 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.592 ************************************ 00:12:12.592 START TEST nvmf_multitarget 00:12:12.592 ************************************ 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:12.592 * Looking for test storage... 00:12:12.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.592 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:12.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.854 --rc genhtml_branch_coverage=1 00:12:12.854 --rc genhtml_function_coverage=1 00:12:12.854 --rc genhtml_legend=1 00:12:12.854 --rc geninfo_all_blocks=1 00:12:12.854 --rc geninfo_unexecuted_blocks=1 00:12:12.854 00:12:12.854 ' 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:12.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.854 --rc genhtml_branch_coverage=1 00:12:12.854 --rc genhtml_function_coverage=1 00:12:12.854 --rc genhtml_legend=1 00:12:12.854 --rc geninfo_all_blocks=1 00:12:12.854 --rc geninfo_unexecuted_blocks=1 00:12:12.854 00:12:12.854 ' 00:12:12.854 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:12.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.854 --rc genhtml_branch_coverage=1 00:12:12.854 --rc genhtml_function_coverage=1 00:12:12.854 --rc genhtml_legend=1 00:12:12.854 --rc geninfo_all_blocks=1 00:12:12.854 --rc geninfo_unexecuted_blocks=1 00:12:12.854 00:12:12.854 ' 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:12.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.855 --rc genhtml_branch_coverage=1 00:12:12.855 --rc genhtml_function_coverage=1 00:12:12.855 --rc genhtml_legend=1 00:12:12.855 --rc geninfo_all_blocks=1 00:12:12.855 --rc geninfo_unexecuted_blocks=1 00:12:12.855 00:12:12.855 ' 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.855 07:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:12.855 07:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.855 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
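The e810/x722/mlx arrays being rebuilt here key pci_bus_cache by "vendor:device" pairs; a port qualifies when its sysfs IDs match one of the entries traced above. A small sketch of that classification against standard sysfs files, using the PCI address and IDs from this log (pci_bus_cache itself is populated elsewhere in nvmf/common.sh and is not reproduced here):

  # Check one NIC's vendor/device pair against the IDs listed above.
  pci=0000:31:00.0
  ven=$(cat /sys/bus/pci/devices/$pci/vendor)   # -> 0x8086
  dev=$(cat /sys/bus/pci/devices/$pci/device)   # -> 0x159b
  case "$ven:$dev" in
      0x8086:0x1592|0x8086:0x159b) echo "E810 port (driver ice)" ;;
      0x8086:0x37d2)               echo "X722 port" ;;
      *)                           echo "not in the supported-NIC table" ;;
  esac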
00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:21.001 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:21.001 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:21.001 Found net devices under 0000:31:00.0: cvl_0_0 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:21.001 Found net devices under 0000:31:00.1: cvl_0_1 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:21.001 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:21.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:12:21.002 00:12:21.002 --- 10.0.0.2 ping statistics --- 00:12:21.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.002 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:21.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:12:21.002 00:12:21.002 --- 10.0.0.1 ping statistics --- 00:12:21.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.002 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3306984 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3306984 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 3306984 ']' 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:21.002 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:21.002 [2024-11-20 07:26:38.618740] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
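For orientation, the nvmf_tcp_init sequence above moves one port of the e810 pair (cvl_0_0) into a private network namespace so a single machine can act as both NVMe/TCP target and initiator over real NICs. A by-hand sketch of the same topology using plain iproute2/iptables follows; interface, namespace, address, and port values are taken from the log, and the sketch is an approximation of the harness, not its code:

  ip netns add cvl_0_0_ns_spdk                      # target side lives here
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port; the SPDK_NVMF comment lets teardown find the rule
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
  modprobe nvme-tcp                                 # kernel initiator driver

The bidirectional pings above are exactly the reachability check the log records before nvmfappstart launches the target inside the namespace.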
00:12:21.002 [2024-11-20 07:26:38.618826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.002 [2024-11-20 07:26:38.721931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.002 [2024-11-20 07:26:38.776545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.002 [2024-11-20 07:26:38.776601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.002 [2024-11-20 07:26:38.776610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.002 [2024-11-20 07:26:38.776618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.002 [2024-11-20 07:26:38.776624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.002 [2024-11-20 07:26:38.778776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.002 [2024-11-20 07:26:38.778881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.002 [2024-11-20 07:26:38.779009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.002 [2024-11-20 07:26:38.779009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.262 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:21.262 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:21.262 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:21.262 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:21.262 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:21.522 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.522 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:21.522 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:21.522 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:21.522 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:21.522 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:21.522 "nvmf_tgt_1" 00:12:21.522 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:21.783 "nvmf_tgt_2" 00:12:21.783 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
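With the app up (-m 0xF starts one reactor per core, hence the four "Reactor started" notices), the multitarget test drives everything through multitarget_rpc.py and checks target counts with jq. Condensed, the sequence it runs next; paths, flags, and expected counts are from the log, and this is an illustrative restatement rather than the script itself:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length        # expect 1: only the default target
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length        # expect 3
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length        # expect 1 again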
00:12:21.783 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:21.783 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:21.783 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:22.044 true 00:12:22.044 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:22.044 true 00:12:22.044 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:22.044 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:22.305 rmmod nvme_tcp 00:12:22.305 rmmod nvme_fabrics 00:12:22.305 rmmod nvme_keyring 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3306984 ']' 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3306984 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 3306984 ']' 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 3306984 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3306984 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:22.305 07:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3306984' 00:12:22.305 killing process with pid 3306984 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 3306984 00:12:22.305 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 3306984 00:12:22.566 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:22.566 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:22.566 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:22.566 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:22.566 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:22.566 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:22.566 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:22.566 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.566 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:22.566 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.566 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.566 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:25.109 00:12:25.109 real 0m12.075s 00:12:25.109 user 0m10.386s 00:12:25.109 sys 0m6.293s 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:25.109 ************************************ 00:12:25.109 END TEST nvmf_multitarget 00:12:25.109 ************************************ 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.109 ************************************ 00:12:25.109 START TEST nvmf_rpc 00:12:25.109 ************************************ 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:25.109 * Looking for test storage... 
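The nvmftestfini teardown logged just above unwinds the whole fixture before the next test re-creates it. In outline; the first three steps are visible in the log, while folding the namespace removal into an explicit ip netns delete is an assumption about what _remove_spdk_ns amounts to:

  kill "$nvmfpid" && wait "$nvmfpid"                     # stop nvmf_tgt (killprocess)
  modprobe -r nvme-tcp                                   # also drops nvme_fabrics/nvme_keyring
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only our ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                        # returns cvl_0_0 to the root ns
  ip -4 addr flush cvl_0_1                               # clear the test addresses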
00:12:25.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.109 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:25.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.110 --rc genhtml_branch_coverage=1 00:12:25.110 --rc genhtml_function_coverage=1 00:12:25.110 --rc genhtml_legend=1 00:12:25.110 --rc geninfo_all_blocks=1 00:12:25.110 --rc geninfo_unexecuted_blocks=1 00:12:25.110 00:12:25.110 ' 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:25.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.110 --rc genhtml_branch_coverage=1 00:12:25.110 --rc genhtml_function_coverage=1 00:12:25.110 --rc genhtml_legend=1 00:12:25.110 --rc geninfo_all_blocks=1 00:12:25.110 --rc geninfo_unexecuted_blocks=1 00:12:25.110 00:12:25.110 ' 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:25.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.110 --rc genhtml_branch_coverage=1 00:12:25.110 --rc genhtml_function_coverage=1 00:12:25.110 --rc genhtml_legend=1 00:12:25.110 --rc geninfo_all_blocks=1 00:12:25.110 --rc geninfo_unexecuted_blocks=1 00:12:25.110 00:12:25.110 ' 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:25.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.110 --rc genhtml_branch_coverage=1 00:12:25.110 --rc genhtml_function_coverage=1 00:12:25.110 --rc genhtml_legend=1 00:12:25.110 --rc geninfo_all_blocks=1 00:12:25.110 --rc geninfo_unexecuted_blocks=1 00:12:25.110 00:12:25.110 ' 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
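The lcov gate traced above (lt 1.15 2 via cmp_versions) is scripts/common.sh's field-wise dotted-version compare: split both versions on . - :, then walk the fields left to right until one differs. A standalone sketch of the same idiom; the function body below is mine, not the script's exact code:

  lt() {                                    # true when version $1 < version $2
    local IFS=.-: i a b
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      ((${a[i]:-0} < ${b[i]:-0})) && return 0   # first differing field decides
      ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                                # equal versions are not less-than
  }
  lt 1.15 2 && echo "old lcov, add branch/function coverage flags"

Missing fields default to 0, so 1.15 compares as 1.15.0 against 2.0.0, matching the decision the log shows when it exports the --rc lcov coverage options.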
00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.110 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:25.110 07:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:25.110 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:33.251 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:33.251 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:33.251 Found net devices under 0000:31:00.0: cvl_0_0 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:33.251 Found net devices under 0000:31:00.1: cvl_0_1 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.251 07:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.251 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:33.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:12:33.252 00:12:33.252 --- 10.0.0.2 ping statistics --- 00:12:33.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.252 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:12:33.252 00:12:33.252 --- 10.0.0.1 ping statistics --- 00:12:33.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.252 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3311536 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3311536 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 3311536 ']' 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:33.252 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.252 [2024-11-20 07:26:50.782923] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
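The rpc.sh assertions that follow lean on two small jq helpers, jcount and jsum; the names, filters, and the wc/awk pipelines appear in the log. A sketch of what they compute, with the caveat that the harness captures the nvmf_get_stats JSON into a variable once rather than re-querying per call as below:

  jcount() {   # how many results a jq filter yields
    rpc_cmd nvmf_get_stats | jq "$1" | wc -l
  }
  jsum() {     # arithmetic sum of the filter's numeric results
    rpc_cmd nvmf_get_stats | jq "$1" | awk '{s+=$1} END {print s}'
  }
  jcount '.poll_groups[].name'           # 4 poll groups, one per core in -m 0xF
  jsum '.poll_groups[].io_qpairs'        # 0 before any host connects

This is why the stats JSON printed below carries four nvmf_tgt_poll_group_00x entries with all-zero qpair counters, and why each entry gains a "TCP" transport only after nvmf_create_transport runs.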
00:12:33.252 [2024-11-20 07:26:50.782991] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.252 [2024-11-20 07:26:50.885194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.252 [2024-11-20 07:26:50.938484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.252 [2024-11-20 07:26:50.938540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.252 [2024-11-20 07:26:50.938549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.252 [2024-11-20 07:26:50.938556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.252 [2024-11-20 07:26:50.938562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.252 [2024-11-20 07:26:50.940696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.252 [2024-11-20 07:26:50.940856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.252 [2024-11-20 07:26:50.940909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.252 [2024-11-20 07:26:50.940910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:33.513 "tick_rate": 2400000000, 00:12:33.513 "poll_groups": [ 00:12:33.513 { 00:12:33.513 "name": "nvmf_tgt_poll_group_000", 00:12:33.513 "admin_qpairs": 0, 00:12:33.513 "io_qpairs": 0, 00:12:33.513 "current_admin_qpairs": 0, 00:12:33.513 "current_io_qpairs": 0, 00:12:33.513 "pending_bdev_io": 0, 00:12:33.513 "completed_nvme_io": 0, 00:12:33.513 "transports": [] 00:12:33.513 }, 00:12:33.513 { 00:12:33.513 "name": "nvmf_tgt_poll_group_001", 00:12:33.513 "admin_qpairs": 0, 00:12:33.513 "io_qpairs": 0, 00:12:33.513 "current_admin_qpairs": 0, 00:12:33.513 "current_io_qpairs": 0, 00:12:33.513 "pending_bdev_io": 0, 00:12:33.513 "completed_nvme_io": 0, 00:12:33.513 "transports": [] 00:12:33.513 }, 00:12:33.513 { 00:12:33.513 "name": "nvmf_tgt_poll_group_002", 00:12:33.513 "admin_qpairs": 0, 00:12:33.513 "io_qpairs": 0, 00:12:33.513 
"current_admin_qpairs": 0, 00:12:33.513 "current_io_qpairs": 0, 00:12:33.513 "pending_bdev_io": 0, 00:12:33.513 "completed_nvme_io": 0, 00:12:33.513 "transports": [] 00:12:33.513 }, 00:12:33.513 { 00:12:33.513 "name": "nvmf_tgt_poll_group_003", 00:12:33.513 "admin_qpairs": 0, 00:12:33.513 "io_qpairs": 0, 00:12:33.513 "current_admin_qpairs": 0, 00:12:33.513 "current_io_qpairs": 0, 00:12:33.513 "pending_bdev_io": 0, 00:12:33.513 "completed_nvme_io": 0, 00:12:33.513 "transports": [] 00:12:33.513 } 00:12:33.513 ] 00:12:33.513 }' 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:33.513 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:33.774 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:33.774 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:33.774 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:33.774 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.774 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.774 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.775 [2024-11-20 07:26:51.775759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:33.775 "tick_rate": 2400000000, 00:12:33.775 "poll_groups": [ 00:12:33.775 { 00:12:33.775 "name": "nvmf_tgt_poll_group_000", 00:12:33.775 "admin_qpairs": 0, 00:12:33.775 "io_qpairs": 0, 00:12:33.775 "current_admin_qpairs": 0, 00:12:33.775 "current_io_qpairs": 0, 00:12:33.775 "pending_bdev_io": 0, 00:12:33.775 "completed_nvme_io": 0, 00:12:33.775 "transports": [ 00:12:33.775 { 00:12:33.775 "trtype": "TCP" 00:12:33.775 } 00:12:33.775 ] 00:12:33.775 }, 00:12:33.775 { 00:12:33.775 "name": "nvmf_tgt_poll_group_001", 00:12:33.775 "admin_qpairs": 0, 00:12:33.775 "io_qpairs": 0, 00:12:33.775 "current_admin_qpairs": 0, 00:12:33.775 "current_io_qpairs": 0, 00:12:33.775 "pending_bdev_io": 0, 00:12:33.775 "completed_nvme_io": 0, 00:12:33.775 "transports": [ 00:12:33.775 { 00:12:33.775 "trtype": "TCP" 00:12:33.775 } 00:12:33.775 ] 00:12:33.775 }, 00:12:33.775 { 00:12:33.775 "name": "nvmf_tgt_poll_group_002", 00:12:33.775 "admin_qpairs": 0, 00:12:33.775 "io_qpairs": 0, 00:12:33.775 "current_admin_qpairs": 0, 00:12:33.775 "current_io_qpairs": 0, 00:12:33.775 "pending_bdev_io": 0, 00:12:33.775 "completed_nvme_io": 0, 00:12:33.775 "transports": [ 00:12:33.775 { 00:12:33.775 "trtype": "TCP" 
00:12:33.775 } 00:12:33.775 ] 00:12:33.775 }, 00:12:33.775 { 00:12:33.775 "name": "nvmf_tgt_poll_group_003", 00:12:33.775 "admin_qpairs": 0, 00:12:33.775 "io_qpairs": 0, 00:12:33.775 "current_admin_qpairs": 0, 00:12:33.775 "current_io_qpairs": 0, 00:12:33.775 "pending_bdev_io": 0, 00:12:33.775 "completed_nvme_io": 0, 00:12:33.775 "transports": [ 00:12:33.775 { 00:12:33.775 "trtype": "TCP" 00:12:33.775 } 00:12:33.775 ] 00:12:33.775 } 00:12:33.775 ] 00:12:33.775 }' 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.775 Malloc1 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.775 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.037 [2024-11-20 07:26:51.983578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:34.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:34.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:34.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:34.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:34.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:34.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:34.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:34.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:34.037 [2024-11-20 07:26:52.020519] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:12:34.037 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:34.037 could not add new controller: failed to write to nvme-fabrics device 00:12:34.037 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:34.037 07:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:34.037 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:34.037 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:34.037 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:34.037 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.037 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.037 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.037 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.952 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.952 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:35.952 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.952 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:35.952 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.867 [2024-11-20 07:26:55.808204] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:12:37.867 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:37.867 could not add new controller: failed to write to nvme-fabrics device 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.867 
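The two rejected connects in this stretch of the trace both fail inside nvmf_qpair_access_allowed because the host NQN is not on the subsystem's allowlist; the test then flips the access-control knob and retries. A hedged sketch of the same sequence as plain SPDK rpc.py calls (the script path and the <host-nqn> placeholder are illustrative; rpc_cmd in the trace wraps these same RPC methods):

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Until one of the next two calls runs, "nvme connect" from this host fails with
  # "Subsystem ... does not allow host ..." and an I/O error on /dev/nvme-fabrics,
  # exactly as logged above:
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 <host-nqn>   # allow one host
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1        # or open to all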
07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.867 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.253 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.253 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:39.253 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.253 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:39.253 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.794 
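The waitforserial/waitforserial_disconnect helpers traced above poll lsblk until a block device carrying the subsystem's serial number appears (or disappears) after nvme connect/disconnect. A minimal sketch of the appear side, keeping the retry bound and 2-second sleep from the trace and simplifying everything else:

  waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
      # count devices whose SERIAL column matches; 0 until the namespace attaches
      (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
      sleep 2
    done
    return 1   # device never showed up
  }
  waitforserial SPDKISFASTANDAWESOME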
07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.794 [2024-11-20 07:26:59.560175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.794 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.181 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.181 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:43.181 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.181 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:43.181 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.095 [2024-11-20 07:27:03.269360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.095 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.009 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.009 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:47.009 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.009 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:47.009 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.921 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.921 [2024-11-20 07:27:07.024941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.921 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.832 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.832 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:50.832 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.832 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:50.832 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:52.743 
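In this loop nvmf_subsystem_add_ns is invoked with an explicit -n 5, pinning the Malloc1 bdev to namespace ID 5, which is why the teardown half of each iteration removes nsid 5 rather than 1. The same pair in direct rpc.py form (script path illustrative):

  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # attach bdev as nsid 5
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5           # detach that same nsid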
07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:52.743 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:52.743 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.743 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:52.743 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.743 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:52.743 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.743 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.743 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:52.743 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:52.743 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.743 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:52.743 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.743 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.744 [2024-11-20 07:27:10.786588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.744 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.125 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.125 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:54.125 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.125 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:54.125 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.667 [2024-11-20 07:27:14.502968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.667 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.050 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.050 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:58.050 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.050 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:58.050 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:59.961 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:59.961 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:59.961 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.961 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:59.961 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.961 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:59.961 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.961 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.961 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:59.961 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:59.961 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.961 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:59.961 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:00.222 
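The five iterations that just finished (target/rpc.sh@81-94) all execute the same create/connect/teardown body; condensed into one sketch, where rpc_cmd, waitforserial and waitforserial_disconnect are the test helpers traced above and $hostnqn/$hostid stand in for the long UUID-based values:

  for i in $(seq 1 5); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
         --hostnqn="$hostnqn" --hostid="$hostid"
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done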
07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.222 [2024-11-20 07:27:18.219672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.222 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.223 [2024-11-20 07:27:18.279784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 
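Unlike the previous loop, this one (target/rpc.sh@99-107) never connects a host: it only cycles namespace management, and add_ns is now called without -n, so the target auto-assigns the first free namespace ID, which is why the matching remove call targets nsid 1. Assuming that auto-assignment behaviour, the round trip is just:

  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n: nsid 1 is assigned
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1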
07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.223 [2024-11-20 07:27:18.347978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.223 [2024-11-20 07:27:18.416164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.223 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.484 [2024-11-20 07:27:18.484396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.484 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:00.485 "tick_rate": 2400000000, 00:13:00.485 "poll_groups": [ 00:13:00.485 { 00:13:00.485 "name": "nvmf_tgt_poll_group_000", 00:13:00.485 "admin_qpairs": 0, 00:13:00.485 "io_qpairs": 224, 00:13:00.485 "current_admin_qpairs": 0, 00:13:00.485 "current_io_qpairs": 0, 00:13:00.485 "pending_bdev_io": 0, 00:13:00.485 "completed_nvme_io": 259, 00:13:00.485 "transports": [ 00:13:00.485 { 00:13:00.485 "trtype": "TCP" 00:13:00.485 } 00:13:00.485 ] 00:13:00.485 }, 00:13:00.485 { 00:13:00.485 "name": "nvmf_tgt_poll_group_001", 00:13:00.485 "admin_qpairs": 1, 00:13:00.485 "io_qpairs": 223, 00:13:00.485 "current_admin_qpairs": 0, 00:13:00.485 "current_io_qpairs": 0, 00:13:00.485 "pending_bdev_io": 0, 00:13:00.485 "completed_nvme_io": 224, 00:13:00.485 "transports": [ 00:13:00.485 { 00:13:00.485 "trtype": "TCP" 00:13:00.485 } 00:13:00.485 ] 00:13:00.485 }, 00:13:00.485 { 00:13:00.485 "name": "nvmf_tgt_poll_group_002", 00:13:00.485 "admin_qpairs": 6, 00:13:00.485 "io_qpairs": 218, 00:13:00.485 "current_admin_qpairs": 0, 00:13:00.485 "current_io_qpairs": 0, 00:13:00.485 "pending_bdev_io": 0, 00:13:00.485 "completed_nvme_io": 272, 00:13:00.485 "transports": [ 00:13:00.485 { 00:13:00.485 "trtype": "TCP" 00:13:00.485 } 00:13:00.485 ] 00:13:00.485 }, 00:13:00.485 { 00:13:00.485 "name": "nvmf_tgt_poll_group_003", 00:13:00.485 "admin_qpairs": 0, 00:13:00.485 "io_qpairs": 224, 00:13:00.485 "current_admin_qpairs": 0, 00:13:00.485 "current_io_qpairs": 0, 00:13:00.485 "pending_bdev_io": 0, 00:13:00.485 "completed_nvme_io": 484, 00:13:00.485 "transports": [ 00:13:00.485 { 00:13:00.485 "trtype": "TCP" 00:13:00.485 } 00:13:00.485 ] 00:13:00.485 } 00:13:00.485 ] 00:13:00.485 }' 00:13:00.485 07:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:00.485 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:00.485 rmmod nvme_tcp 00:13:00.485 rmmod nvme_fabrics 00:13:00.747 rmmod nvme_keyring 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3311536 ']' 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3311536 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 3311536 ']' 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 3311536 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3311536 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
3311536' 00:13:00.747 killing process with pid 3311536 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 3311536 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 3311536 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.747 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.292 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:03.292 00:13:03.292 real 0m38.206s 00:13:03.292 user 1m53.790s 00:13:03.292 sys 0m8.062s 00:13:03.292 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:03.292 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.292 ************************************ 00:13:03.292 END TEST nvmf_rpc 00:13:03.292 ************************************ 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.292 ************************************ 00:13:03.292 START TEST nvmf_invalid 00:13:03.292 ************************************ 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:03.292 * Looking for test storage... 
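The jsum checks a few entries back ((( 7 > 0 )) for admin qpairs, (( 889 > 0 )) for I/O qpairs) reduce to summing one field across every poll group of the nvmf_get_stats JSON. A standalone sketch of the same jq + awk pipeline (rpc.py path illustrative):

  ./scripts/rpc.py nvmf_get_stats \
    | jq '.poll_groups[].io_qpairs' \
    | awk '{s+=$1} END {print s}'   # 224 + 223 + 218 + 224 = 889 in this run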
00:13:03.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:03.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.292 --rc genhtml_branch_coverage=1 00:13:03.292 --rc genhtml_function_coverage=1 00:13:03.292 --rc genhtml_legend=1 00:13:03.292 --rc geninfo_all_blocks=1 00:13:03.292 --rc geninfo_unexecuted_blocks=1 00:13:03.292 00:13:03.292 ' 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:03.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.292 --rc genhtml_branch_coverage=1 00:13:03.292 --rc genhtml_function_coverage=1 00:13:03.292 --rc genhtml_legend=1 00:13:03.292 --rc geninfo_all_blocks=1 00:13:03.292 --rc geninfo_unexecuted_blocks=1 00:13:03.292 00:13:03.292 ' 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:03.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.292 --rc genhtml_branch_coverage=1 00:13:03.292 --rc genhtml_function_coverage=1 00:13:03.292 --rc genhtml_legend=1 00:13:03.292 --rc geninfo_all_blocks=1 00:13:03.292 --rc geninfo_unexecuted_blocks=1 00:13:03.292 00:13:03.292 ' 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:03.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.292 --rc genhtml_branch_coverage=1 00:13:03.292 --rc genhtml_function_coverage=1 00:13:03.292 --rc genhtml_legend=1 00:13:03.292 --rc geninfo_all_blocks=1 00:13:03.292 --rc geninfo_unexecuted_blocks=1 00:13:03.292 00:13:03.292 ' 00:13:03.292 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:03.293 07:27:21 
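The scripts/common.sh trace just above is lt 1.15 2 running cmp_versions: each version string is split on ./-/: into fields, and the fields are compared numerically until one side differs. A compact standalone sketch of the same field-wise compare (not the library function itself; treating missing fields as 0 is an assumption here):

    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<<"$1"; read -ra b <<<"$2"
        local v max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # strictly less: true
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # equal: not less-than
    }
    # version_lt 1.15 2 succeeds, matching the lt trace above (1 < 2 decides it)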
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:03.293 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:11.435 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:11.435 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:11.435 Found net devices under 0000:31:00.0: cvl_0_0 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:11.435 Found net devices under 0000:31:00.1: cvl_0_1 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:11.435 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:11.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:13:11.435 00:13:11.435 --- 10.0.0.2 ping statistics --- 00:13:11.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.435 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:11.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:13:11.436 00:13:11.436 --- 10.0.0.1 ping statistics --- 00:13:11.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.436 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3321956 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3321956 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 3321956 ']' 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:11.436 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:11.436 [2024-11-20 07:27:29.033292] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
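Everything from nvmf_tcp_init through the two pings above builds a small two-interface test bed: the target-side E810 port moves into a fresh network namespace, each side gets a /24 address, the NVMe/TCP port is opened, and reachability is verified in both directions before nvmf_tgt starts inside the namespace. A condensed sketch using the device names and addresses from this log; the waitforlisten stand-in on the last line is an assumption (it polls rpc_get_methods rather than using the real helper):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done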
00:13:11.436 [2024-11-20 07:27:29.033361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.436 [2024-11-20 07:27:29.134653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.436 [2024-11-20 07:27:29.187152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.436 [2024-11-20 07:27:29.187206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.436 [2024-11-20 07:27:29.187215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.436 [2024-11-20 07:27:29.187223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.436 [2024-11-20 07:27:29.187234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.436 [2024-11-20 07:27:29.189715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.436 [2024-11-20 07:27:29.189878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.436 [2024-11-20 07:27:29.190186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.436 [2024-11-20 07:27:29.190190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.696 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:11.696 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:13:11.696 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:11.696 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:11.696 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:11.696 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.696 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:11.696 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5278 00:13:11.958 [2024-11-20 07:27:30.068800] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:11.958 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:11.958 { 00:13:11.958 "nqn": "nqn.2016-06.io.spdk:cnode5278", 00:13:11.958 "tgt_name": "foobar", 00:13:11.958 "method": "nvmf_create_subsystem", 00:13:11.958 "req_id": 1 00:13:11.958 } 00:13:11.958 Got JSON-RPC error response 00:13:11.958 response: 00:13:11.958 { 00:13:11.958 "code": -32603, 00:13:11.958 "message": "Unable to find target foobar" 00:13:11.958 }' 00:13:11.958 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:11.958 { 00:13:11.958 "nqn": "nqn.2016-06.io.spdk:cnode5278", 00:13:11.958 "tgt_name": "foobar", 00:13:11.958 "method": "nvmf_create_subsystem", 00:13:11.958 "req_id": 1 00:13:11.958 } 00:13:11.958 Got JSON-RPC error response 00:13:11.958 
response: 00:13:11.958 { 00:13:11.958 "code": -32603, 00:13:11.958 "message": "Unable to find target foobar" 00:13:11.958 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:11.958 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:11.958 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4151 00:13:12.219 [2024-11-20 07:27:30.277702] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4151: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:12.219 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:12.219 { 00:13:12.219 "nqn": "nqn.2016-06.io.spdk:cnode4151", 00:13:12.219 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:12.219 "method": "nvmf_create_subsystem", 00:13:12.219 "req_id": 1 00:13:12.219 } 00:13:12.219 Got JSON-RPC error response 00:13:12.219 response: 00:13:12.219 { 00:13:12.219 "code": -32602, 00:13:12.219 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:12.219 }' 00:13:12.219 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:12.219 { 00:13:12.219 "nqn": "nqn.2016-06.io.spdk:cnode4151", 00:13:12.219 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:12.219 "method": "nvmf_create_subsystem", 00:13:12.219 "req_id": 1 00:13:12.219 } 00:13:12.219 Got JSON-RPC error response 00:13:12.219 response: 00:13:12.219 { 00:13:12.219 "code": -32602, 00:13:12.219 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:12.219 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:12.219 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:12.219 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10258 00:13:12.480 [2024-11-20 07:27:30.486426] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10258: invalid model number 'SPDK_Controller' 00:13:12.480 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:12.480 { 00:13:12.481 "nqn": "nqn.2016-06.io.spdk:cnode10258", 00:13:12.481 "model_number": "SPDK_Controller\u001f", 00:13:12.481 "method": "nvmf_create_subsystem", 00:13:12.481 "req_id": 1 00:13:12.481 } 00:13:12.481 Got JSON-RPC error response 00:13:12.481 response: 00:13:12.481 { 00:13:12.481 "code": -32602, 00:13:12.481 "message": "Invalid MN SPDK_Controller\u001f" 00:13:12.481 }' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:12.481 { 00:13:12.481 "nqn": "nqn.2016-06.io.spdk:cnode10258", 00:13:12.481 "model_number": "SPDK_Controller\u001f", 00:13:12.481 "method": "nvmf_create_subsystem", 00:13:12.481 "req_id": 1 00:13:12.481 } 00:13:12.481 Got JSON-RPC error response 00:13:12.481 response: 00:13:12.481 { 00:13:12.481 "code": -32602, 00:13:12.481 "message": "Invalid MN SPDK_Controller\u001f" 00:13:12.481 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:12.481 07:27:30 
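Each negative case here has the same shape: issue an nvmf_create_subsystem that must fail, capture the JSON-RPC error, and glob-match the expected message (the escaped patterns such as *\I\n\v\a\l\i\d\ \S\N* are just glob-quoted forms of "Invalid SN"). A minimal sketch of one such check, using the foobar target case from the trace:

    # request an nqn on a target that does not exist; rpc.py exits nonzero and
    # prints the JSON-RPC error, which the test keeps for matching
    out=$(rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5278 2>&1) || true
    [[ $out == *"Unable to find target"* ]]   # fail the test if the message changed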
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:12.481 
07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 
00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:12.481 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.482 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.742 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:12.742 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:12.742 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:12.742 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.742 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.742 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:12.742 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:12.742 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:12.742 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.742 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ % == \- ]] 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '%uU^,H90[ikhyU( $u"cm' 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '%uU^,H90[ikhyU( $u"cm' nqn.2016-06.io.spdk:cnode32252 00:13:12.743 [2024-11-20 07:27:30.871858] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32252: invalid serial number '%uU^,H90[ikhyU( $u"cm' 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:12.743 { 00:13:12.743 "nqn": "nqn.2016-06.io.spdk:cnode32252", 00:13:12.743 "serial_number": "%uU^,H90[ikhyU( $u\"cm", 00:13:12.743 "method": "nvmf_create_subsystem", 00:13:12.743 "req_id": 1 00:13:12.743 } 00:13:12.743 Got JSON-RPC error response 00:13:12.743 response: 00:13:12.743 { 00:13:12.743 "code": -32602, 00:13:12.743 "message": "Invalid SN %uU^,H90[ikhyU( $u\"cm" 00:13:12.743 }' 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:12.743 { 00:13:12.743 "nqn": "nqn.2016-06.io.spdk:cnode32252", 00:13:12.743 "serial_number": "%uU^,H90[ikhyU( $u\"cm", 00:13:12.743 "method": "nvmf_create_subsystem", 00:13:12.743 "req_id": 1 00:13:12.743 } 00:13:12.743 Got JSON-RPC error response 00:13:12.743 response: 00:13:12.743 { 00:13:12.743 "code": -32602, 00:13:12.743 "message": "Invalid SN %uU^,H90[ikhyU( $u\"cm" 00:13:12.743 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' 
'73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:12.743 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 
00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 
00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x55' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:13.005 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 32 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:13:13.006 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ q == \- ]] 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'q/1)e7=R48'\''FwAS_j!IBTwinU$4'\''3$ p=F{6L$2' 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'q/1)e7=R48'\''FwAS_j!IBTwinU$4'\''3$ p=F{6L$2' nqn.2016-06.io.spdk:cnode3709 00:13:13.267 [2024-11-20 07:27:31.389724] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3709: invalid model number 'q/1)e7=R48'FwAS_j!IBTwinU$4'3$ p=F{6L$2' 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:13.267 { 00:13:13.267 "nqn": "nqn.2016-06.io.spdk:cnode3709", 00:13:13.267 "model_number": "q/1)e7=R48'\''FwAS_j!IBTwinU$4'\''3$\u007f p=F{6L\u007f$2", 00:13:13.267 "method": "nvmf_create_subsystem", 00:13:13.267 "req_id": 1 00:13:13.267 } 00:13:13.267 Got JSON-RPC error response 00:13:13.267 response: 00:13:13.267 { 00:13:13.267 "code": -32602, 00:13:13.267 "message": "Invalid MN q/1)e7=R48'\''FwAS_j!IBTwinU$4'\''3$\u007f p=F{6L\u007f$2" 00:13:13.267 }' 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:13.267 { 00:13:13.267 "nqn": "nqn.2016-06.io.spdk:cnode3709", 00:13:13.267 "model_number": "q/1)e7=R48'FwAS_j!IBTwinU$4'3$\u007f p=F{6L\u007f$2", 00:13:13.267 "method": "nvmf_create_subsystem", 00:13:13.267 "req_id": 1 00:13:13.267 } 00:13:13.267 Got JSON-RPC error response 00:13:13.267 response: 00:13:13.267 { 00:13:13.267 "code": -32602, 00:13:13.267 "message": "Invalid MN q/1)e7=R48'FwAS_j!IBTwinU$4'3$\u007f p=F{6L\u007f$2" 00:13:13.267 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:13.267 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:13.528 [2024-11-20 07:27:31.578427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.528 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:13.789 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:13.789 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:13.789 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:13.789 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:13.789 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:13.789 [2024-11-20 07:27:31.961034] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:13.789 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:13.789 { 00:13:13.789 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:13.789 "listen_address": { 00:13:13.789 "trtype": "tcp", 00:13:13.789 "traddr": "", 00:13:13.789 "trsvcid": "4421" 00:13:13.789 }, 00:13:13.789 "method": "nvmf_subsystem_remove_listener", 00:13:13.789 "req_id": 1 00:13:13.789 } 00:13:13.789 Got JSON-RPC error response 00:13:13.789 response: 00:13:13.789 { 00:13:13.789 "code": -32602, 00:13:13.789 "message": "Invalid parameters" 00:13:13.789 }' 00:13:13.789 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:13.789 { 00:13:13.789 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:13.789 "listen_address": { 00:13:13.789 "trtype": "tcp", 00:13:13.789 "traddr": "", 00:13:13.789 "trsvcid": "4421" 00:13:13.789 }, 00:13:13.789 "method": "nvmf_subsystem_remove_listener", 00:13:13.789 "req_id": 1 00:13:13.789 } 00:13:13.789 Got JSON-RPC error response 00:13:13.789 response: 00:13:13.789 { 00:13:13.789 "code": -32602, 00:13:13.789 "message": "Invalid parameters" 00:13:13.789 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:14.050 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1144 -i 0 00:13:14.051 [2024-11-20 07:27:32.149608] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1144: invalid cntlid range [0-65519] 00:13:14.051 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:14.051 { 00:13:14.051 "nqn": "nqn.2016-06.io.spdk:cnode1144", 00:13:14.051 "min_cntlid": 0, 00:13:14.051 "method": "nvmf_create_subsystem", 00:13:14.051 "req_id": 1 00:13:14.051 } 00:13:14.051 Got JSON-RPC error response 00:13:14.051 response: 00:13:14.051 { 00:13:14.051 "code": -32602, 00:13:14.051 "message": "Invalid cntlid range [0-65519]" 00:13:14.051 }' 00:13:14.051 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:14.051 { 00:13:14.051 "nqn": "nqn.2016-06.io.spdk:cnode1144", 00:13:14.051 "min_cntlid": 0, 00:13:14.051 "method": "nvmf_create_subsystem", 00:13:14.051 "req_id": 1 00:13:14.051 } 00:13:14.051 Got JSON-RPC 
error response 00:13:14.051 response: 00:13:14.051 { 00:13:14.051 "code": -32602, 00:13:14.051 "message": "Invalid cntlid range [0-65519]" 00:13:14.051 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.051 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7838 -i 65520 00:13:14.311 [2024-11-20 07:27:32.338258] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7838: invalid cntlid range [65520-65519] 00:13:14.311 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:14.311 { 00:13:14.311 "nqn": "nqn.2016-06.io.spdk:cnode7838", 00:13:14.311 "min_cntlid": 65520, 00:13:14.311 "method": "nvmf_create_subsystem", 00:13:14.311 "req_id": 1 00:13:14.311 } 00:13:14.311 Got JSON-RPC error response 00:13:14.311 response: 00:13:14.311 { 00:13:14.311 "code": -32602, 00:13:14.311 "message": "Invalid cntlid range [65520-65519]" 00:13:14.311 }' 00:13:14.311 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:14.311 { 00:13:14.311 "nqn": "nqn.2016-06.io.spdk:cnode7838", 00:13:14.311 "min_cntlid": 65520, 00:13:14.311 "method": "nvmf_create_subsystem", 00:13:14.311 "req_id": 1 00:13:14.311 } 00:13:14.311 Got JSON-RPC error response 00:13:14.311 response: 00:13:14.311 { 00:13:14.311 "code": -32602, 00:13:14.311 "message": "Invalid cntlid range [65520-65519]" 00:13:14.311 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.311 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23413 -I 0 00:13:14.572 [2024-11-20 07:27:32.526816] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23413: invalid cntlid range [1-0] 00:13:14.572 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:14.572 { 00:13:14.572 "nqn": "nqn.2016-06.io.spdk:cnode23413", 00:13:14.572 "max_cntlid": 0, 00:13:14.572 "method": "nvmf_create_subsystem", 00:13:14.572 "req_id": 1 00:13:14.572 } 00:13:14.572 Got JSON-RPC error response 00:13:14.572 response: 00:13:14.572 { 00:13:14.572 "code": -32602, 00:13:14.572 "message": "Invalid cntlid range [1-0]" 00:13:14.572 }' 00:13:14.572 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:14.572 { 00:13:14.572 "nqn": "nqn.2016-06.io.spdk:cnode23413", 00:13:14.572 "max_cntlid": 0, 00:13:14.572 "method": "nvmf_create_subsystem", 00:13:14.572 "req_id": 1 00:13:14.572 } 00:13:14.572 Got JSON-RPC error response 00:13:14.572 response: 00:13:14.572 { 00:13:14.572 "code": -32602, 00:13:14.572 "message": "Invalid cntlid range [1-0]" 00:13:14.572 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.573 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2940 -I 65520 00:13:14.573 [2024-11-20 07:27:32.715384] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2940: invalid cntlid range [1-65520] 00:13:14.573 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:14.573 { 00:13:14.573 "nqn": "nqn.2016-06.io.spdk:cnode2940", 00:13:14.573 "max_cntlid": 65520, 
00:13:14.573 "method": "nvmf_create_subsystem", 00:13:14.573 "req_id": 1 00:13:14.573 } 00:13:14.573 Got JSON-RPC error response 00:13:14.573 response: 00:13:14.573 { 00:13:14.573 "code": -32602, 00:13:14.573 "message": "Invalid cntlid range [1-65520]" 00:13:14.573 }' 00:13:14.573 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:14.573 { 00:13:14.573 "nqn": "nqn.2016-06.io.spdk:cnode2940", 00:13:14.573 "max_cntlid": 65520, 00:13:14.573 "method": "nvmf_create_subsystem", 00:13:14.573 "req_id": 1 00:13:14.573 } 00:13:14.573 Got JSON-RPC error response 00:13:14.573 response: 00:13:14.573 { 00:13:14.573 "code": -32602, 00:13:14.573 "message": "Invalid cntlid range [1-65520]" 00:13:14.573 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.573 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23372 -i 6 -I 5 00:13:14.833 [2024-11-20 07:27:32.899983] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23372: invalid cntlid range [6-5] 00:13:14.833 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:14.833 { 00:13:14.833 "nqn": "nqn.2016-06.io.spdk:cnode23372", 00:13:14.833 "min_cntlid": 6, 00:13:14.833 "max_cntlid": 5, 00:13:14.833 "method": "nvmf_create_subsystem", 00:13:14.833 "req_id": 1 00:13:14.833 } 00:13:14.833 Got JSON-RPC error response 00:13:14.833 response: 00:13:14.833 { 00:13:14.833 "code": -32602, 00:13:14.833 "message": "Invalid cntlid range [6-5]" 00:13:14.833 }' 00:13:14.833 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:14.833 { 00:13:14.833 "nqn": "nqn.2016-06.io.spdk:cnode23372", 00:13:14.833 "min_cntlid": 6, 00:13:14.833 "max_cntlid": 5, 00:13:14.833 "method": "nvmf_create_subsystem", 00:13:14.834 "req_id": 1 00:13:14.834 } 00:13:14.834 Got JSON-RPC error response 00:13:14.834 response: 00:13:14.834 { 00:13:14.834 "code": -32602, 00:13:14.834 "message": "Invalid cntlid range [6-5]" 00:13:14.834 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:14.834 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:14.834 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:14.834 { 00:13:14.834 "name": "foobar", 00:13:14.834 "method": "nvmf_delete_target", 00:13:14.834 "req_id": 1 00:13:14.834 } 00:13:14.834 Got JSON-RPC error response 00:13:14.834 response: 00:13:14.834 { 00:13:14.834 "code": -32602, 00:13:14.834 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:14.834 }' 00:13:14.834 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:14.834 { 00:13:14.834 "name": "foobar", 00:13:14.834 "method": "nvmf_delete_target", 00:13:14.834 "req_id": 1 00:13:14.834 } 00:13:14.834 Got JSON-RPC error response 00:13:14.834 response: 00:13:14.834 { 00:13:14.834 "code": -32602, 00:13:14.834 "message": "The specified target doesn't exist, cannot delete it." 
00:13:14.834 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:14.834 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:14.834 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:14.834 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:15.094 rmmod nvme_tcp 00:13:15.094 rmmod nvme_fabrics 00:13:15.094 rmmod nvme_keyring 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3321956 ']' 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3321956 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 3321956 ']' 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 3321956 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3321956 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:15.094 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3321956' 00:13:15.095 killing process with pid 3321956 00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 3321956 00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 3321956 00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore
00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:15.095 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:17.641
00:13:17.641 real 0m14.294s
00:13:17.641 user 0m21.002s
00:13:17.641 sys 0m6.845s
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:17.641 ************************************
00:13:17.641 END TEST nvmf_invalid
00:13:17.641 ************************************
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:17.641 ************************************
00:13:17.641 START TEST nvmf_connect_stress
00:13:17.641 ************************************
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:17.641 * Looking for test storage...
00:13:17.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
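For reference, the nvmf_invalid suite that ends above is a pure negative test of the target's JSON-RPC input validation: invalid.sh assembles a random model number one byte at a time (the long printf %x / echo -e loop in the trace), deliberately mixing in space and non-printable bytes such as 0x7f, then feeds it to nvmf_create_subsystem and asserts that the RPC fails with an "Invalid MN" error. The same request-and-match pattern is then repeated for out-of-range cntlid bounds ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]) and for deleting a target that does not exist. A minimal sketch of that pattern, condensed from the trace (gen_random_mn and the fixed cnode number are illustrative names, not the actual helpers in invalid.sh):

    #!/usr/bin/env bash
    # Condensed sketch of the nvmf_invalid negative-test pattern.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    gen_random_mn() {
        # Build a string from random printable bytes, the way the
        # printf %x / echo -e loop in the trace assembles one character
        # at a time (the real test also pulls in space and 0x7f).
        local s='' i
        for ((i = 0; i < 41; i++)); do
            s+=$(echo -e "\\x$(printf %x $((RANDOM % 94 + 33)))")
        done
        printf '%s\n' "$s"
    }

    mn=$(gen_random_mn)
    # The RPC is expected to fail; the test passes only if the error
    # message contains "Invalid MN".
    out=$("$rpc" nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode3709 2>&1) || true
    [[ $out == *"Invalid MN"* ]] || exit 1

The cntlid checks work the same way, passing -i/-I values outside 1-65519 (or with min greater than max) and matching the response against "Invalid cntlid range".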
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:17.641 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-:
00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1
00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-:
00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2
00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<'
00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2
00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1
00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in
00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1
00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:17.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.642 --rc genhtml_branch_coverage=1 00:13:17.642 --rc genhtml_function_coverage=1 00:13:17.642 --rc genhtml_legend=1 00:13:17.642 --rc geninfo_all_blocks=1 00:13:17.642 --rc geninfo_unexecuted_blocks=1 00:13:17.642 00:13:17.642 ' 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:17.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.642 --rc genhtml_branch_coverage=1 00:13:17.642 --rc genhtml_function_coverage=1 00:13:17.642 --rc genhtml_legend=1 00:13:17.642 --rc geninfo_all_blocks=1 00:13:17.642 --rc geninfo_unexecuted_blocks=1 00:13:17.642 00:13:17.642 ' 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:17.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.642 --rc genhtml_branch_coverage=1 00:13:17.642 --rc genhtml_function_coverage=1 00:13:17.642 --rc genhtml_legend=1 00:13:17.642 --rc geninfo_all_blocks=1 00:13:17.642 --rc geninfo_unexecuted_blocks=1 00:13:17.642 00:13:17.642 ' 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:17.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.642 --rc genhtml_branch_coverage=1 00:13:17.642 --rc genhtml_function_coverage=1 00:13:17.642 --rc genhtml_legend=1 00:13:17.642 --rc geninfo_all_blocks=1 00:13:17.642 --rc geninfo_unexecuted_blocks=1 00:13:17.642 00:13:17.642 ' 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.642 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:17.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:17.643 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.787 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:25.788 07:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:25.788 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:25.788 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:25.788 Found net devices under 0000:31:00.0: cvl_0_0 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:25.788 Found net devices under 0000:31:00.1: cvl_0_1 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.788 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.789 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.789 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:25.789 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.789 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.789 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:25.789 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:25.789 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.789 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.789 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:25.789 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:25.789 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.789 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:25.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:13:25.789 00:13:25.789 --- 10.0.0.2 ping statistics --- 00:13:25.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.789 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:13:25.789 00:13:25.789 --- 10.0.0.1 ping statistics --- 00:13:25.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.789 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3327168 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3327168 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 3327168 ']' 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...'
00:13:25.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable
00:13:25.789 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:25.789 [2024-11-20 07:27:43.387864] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
00:13:25.789 [2024-11-20 07:27:43.387930] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:25.789 [2024-11-20 07:27:43.489601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:25.789 [2024-11-20 07:27:43.540616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:25.789 [2024-11-20 07:27:43.540666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:25.789 [2024-11-20 07:27:43.540675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:25.789 [2024-11-20 07:27:43.540683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:25.789 [2024-11-20 07:27:43.540689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:25.789 [2024-11-20 07:27:43.542816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:25.789 [2024-11-20 07:27:43.543010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:25.789 [2024-11-20 07:27:43.543011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:26.051 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:13:26.051 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0
00:13:26.051 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:26.051 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:26.051 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:26.051 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:26.051 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:26.051 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.051 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:26.051 [2024-11-20 07:27:44.239794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:26.051 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.051 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:26.051 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
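A note on the topology the harness finished building at 07:27:42-43 above: nvmf_tcp_init turns the two E810 ports into a point-to-point link across a single host by moving the target-side port into a private network namespace, so the initiator (cvl_0_1, 10.0.0.1, root namespace) and the target (cvl_0_0, 10.0.0.2, namespace cvl_0_0_ns_spdk) exchange real on-wire TCP. A minimal sketch of the same setup, using this run's device and namespace names (they vary per machine):

    # Target port lives in its own namespace; the initiator stays in the root one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the SPDK_NVMF comment tags the rule for teardown.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

Both directions were then ping-verified (the 0.667 ms and 0.287 ms probes above) before nvmf_tgt was launched inside the namespace via ip netns exec.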
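The rpc_cmd calls traced at 07:27:44 here and immediately below provision the target over /var/tmp/spdk.sock. Roughly the same sequence can be issued by hand with SPDK's scripts/rpc.py, of which rpc_cmd is the harness's thin wrapper; a sketch with the flags copied from the trace, not the harness's exact invocation:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192       # flags exactly as traced; -u 8192 is the I/O unit size in bytes
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                             # allow any host, up to 10 namespaces
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                 # listen inside the namespace
    ./scripts/rpc.py bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512 B blocks

The NULL1 bdev gives the stressor something to discover without touching real media; the step that attaches it to the subsystem is not shown in this excerpt.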
00:13:26.051 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.312 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.312 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.312 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.312 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.312 [2024-11-20 07:27:44.265380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.312 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.312 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.312 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.313 NULL1 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3327480 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.313 07:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.313 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.575 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.575 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:26.575 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.575 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.575 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.146 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.146 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:27.146 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.146 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.146 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.407 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.407 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:27.407 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.407 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.407 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.668 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.668 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:27.668 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.668 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.668 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.931 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.931 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:27.931 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.931 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.931 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.192 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.192 07:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:28.192 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.192 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.192 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.764 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.764 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:28.764 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.764 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.764 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.025 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.025 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:29.025 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.025 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.025 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.285 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.285 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:29.285 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.285 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.285 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.545 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.545 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:29.545 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.545 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.545 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.805 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.805 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:29.805 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.805 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.805 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.377 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.377 07:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:30.377 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.377 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.377 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.637 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.637 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:30.637 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.637 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.637 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.898 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.898 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:30.898 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.898 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.898 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.160 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.160 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:31.160 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.160 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.160 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.421 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.421 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:31.421 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.421 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.421 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.990 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.990 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:31.990 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.990 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.990 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.250 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.250 07:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:32.250 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.250 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.250 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.510 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.510 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:32.510 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.510 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.510 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.772 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.772 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:32.772 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.772 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.772 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.033 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.033 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:33.033 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.033 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.033 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.604 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.604 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:33.604 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.604 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.604 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.911 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.911 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:33.911 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.911 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.911 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.171 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.171 07:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:34.171 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.171 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.171 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.432 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.432 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:34.432 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.432 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.432 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.692 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.692 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:34.692 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.692 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.692 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.262 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.262 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:35.262 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.262 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.262 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.523 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.523 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:35.523 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.523 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.523 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.784 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.784 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:35.784 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.784 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.784 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.046 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.046 07:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:36.046 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.046 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.046 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.307 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:36.307 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.307 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3327480 00:13:36.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3327480) - No such process 00:13:36.307 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3327480 00:13:36.307 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:36.307 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:36.307 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:36.307 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:36.307 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:36.307 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:36.307 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:36.307 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:36.307 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:36.307 rmmod nvme_tcp 00:13:36.569 rmmod nvme_fabrics 00:13:36.569 rmmod nvme_keyring 00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3327168 ']' 00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3327168 00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 3327168 ']' 00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 3327168 00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3327168 00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 
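The long run of kill -0 3327480 / rpc_cmd pairs from 07:27:44 through 07:27:54 is the heart of the test: connect_stress (PERF_PID 3327480) hammers nqn.2016-06.io.spdk:cnode1 from core 0x1 for the ten seconds requested by -t 10, while the shell keeps replaying the batch of RPCs queued into rpc.txt for as long as the stressor is alive; once it exits, kill -0 fails with 'No such process' and the script falls through to cleanup. The shape of that loop, paraphrased (the real logic is test/nvmf/target/connect_stress.sh; build_rpc is a stand-in for the seq 1 20 / cat steps above, whose exact rpc.txt contents this excerpt does not show):

    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    for i in $(seq 1 20); do
        build_rpc >> "$rpcs"          # stand-in: the trace only shows 'cat' appending
    done
    while kill -0 "$PERF_PID"; do     # prints 'No such process' once the stressor exits
        rpc_cmd < "$rpcs"             # replay the queued RPCs against the live target
    done
    wait "$PERF_PID"                  # reap the stressor and collect its status
    rm -f "$rpcs"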
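One detail of the nvmftestfini teardown that follows is worth flagging: the firewall is not unwound rule by rule. Every rule the harness inserted carries the SPDK_NVMF comment added at setup time, so iptr restores iptables by filtering the saved ruleset, which stays correct however many tagged rules accumulated. The whole step reduces to a one-line sketch:

    # Keep every rule except those tagged SPDK_NVMF at setup time.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

This is also why the --comment 'SPDK_NVMF:-I INPUT 1 ...' argument back at 07:27:43 embeds the original insert command: it is purely a searchable marker, not something iptables interprets.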
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3327168'
killing process with pid 3327168
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 3327168
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 3327168
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:36.569 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:39.117 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:39.117
00:13:39.117 real 0m21.365s
00:13:39.117 user 0m41.993s
00:13:39.117 sys 0m9.502s
00:13:39.117 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:39.117 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:39.117 ************************************
00:13:39.117 END TEST nvmf_connect_stress
00:13:39.117 ************************************
00:13:39.117 07:27:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:13:39.117 07:27:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:13:39.117 07:27:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:39.117 07:27:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:39.117 ************************************
00:13:39.117 START TEST nvmf_fused_ordering
00:13:39.117 ************************************
00:13:39.117 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:13:39.117 * Looking for test storage...
00:13:39.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.117 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:39.117 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:39.117 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:39.117 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:39.117 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.117 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.117 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.117 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.117 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.117 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.117 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.117 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.117 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.117 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.117 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.117 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:39.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.118 --rc genhtml_branch_coverage=1 00:13:39.118 --rc genhtml_function_coverage=1 00:13:39.118 --rc genhtml_legend=1 00:13:39.118 --rc geninfo_all_blocks=1 00:13:39.118 --rc geninfo_unexecuted_blocks=1 00:13:39.118 00:13:39.118 ' 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:39.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.118 --rc genhtml_branch_coverage=1 00:13:39.118 --rc genhtml_function_coverage=1 00:13:39.118 --rc genhtml_legend=1 00:13:39.118 --rc geninfo_all_blocks=1 00:13:39.118 --rc geninfo_unexecuted_blocks=1 00:13:39.118 00:13:39.118 ' 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:39.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.118 --rc genhtml_branch_coverage=1 00:13:39.118 --rc genhtml_function_coverage=1 00:13:39.118 --rc genhtml_legend=1 00:13:39.118 --rc geninfo_all_blocks=1 00:13:39.118 --rc geninfo_unexecuted_blocks=1 00:13:39.118 00:13:39.118 ' 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:39.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.118 --rc genhtml_branch_coverage=1 00:13:39.118 --rc genhtml_function_coverage=1 00:13:39.118 --rc genhtml_legend=1 00:13:39.118 --rc geninfo_all_blocks=1 00:13:39.118 --rc geninfo_unexecuted_blocks=1 00:13:39.118 00:13:39.118 ' 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:39.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:39.118 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:39.119 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:39.119 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:47.473 07:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:47.473 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:47.473 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:47.473 Found net devices under 0000:31:00.0: cvl_0_0 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:47.473 Found net devices under 0000:31:00.1: cvl_0_1 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
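
Each surviving PCI function is then mapped to its kernel interface through sysfs, which is where the cvl_0_0 and cvl_0_1 names in the "Found net devices" lines come from. The loop, reduced to its essentials (the operstate check, [[ up == up ]] in the trace, is elided for brevity):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../0000:31:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
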
-- # net_devs+=("${pci_net_devs[@]}") 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.473 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:47.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:13:47.474 00:13:47.474 --- 10.0.0.2 ping statistics --- 00:13:47.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.474 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:13:47.474 00:13:47.474 --- 10.0.0.1 ping statistics --- 00:13:47.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.474 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3333606 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3333606 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 3333606 ']' 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
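
nvmf_tcp_init, traced above, builds the entire target/initiator topology on a single host: the target port cvl_0_0 is moved into a fresh network namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule admits TCP/4420 from the initiator side, and one ping in each direction proves reachability before any NVMe traffic flows. The same setup, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

The sub-millisecond round trips in both ping summaries confirm the two namespaces are wired correctly.
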
/var/tmp/spdk.sock...' 00:13:47.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:47.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.474 [2024-11-20 07:28:04.786265] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:13:47.474 [2024-11-20 07:28:04.786331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.474 [2024-11-20 07:28:04.885042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.474 [2024-11-20 07:28:04.935204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.474 [2024-11-20 07:28:04.935259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.474 [2024-11-20 07:28:04.935268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.474 [2024-11-20 07:28:04.935275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.474 [2024-11-20 07:28:04.935280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.474 [2024-11-20 07:28:04.936114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.474 [2024-11-20 07:28:05.651367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.474 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.474 [2024-11-20 07:28:05.675630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.734 NULL1 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.734 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:47.734 [2024-11-20 07:28:05.745060] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
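
With nvmf_tgt (pid 3333606) started inside the namespace and listening on /var/tmp/spdk.sock, fused_ordering.sh configures it entirely over JSON-RPC: a TCP transport with the flags shown in the trace, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev attached as namespace 1, matching the "Namespace ID: 1 size: 1GB" attach line below. The trace issues these through its rpc_cmd wrapper; invoking SPDK's scripts/rpc.py directly, as sketched here, is our assumption of an equivalent form:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MiB backing device, 512-byte blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
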
00:13:47.734 [2024-11-20 07:28:05.745105] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333937 ] 00:13:47.995 Attached to nqn.2016-06.io.spdk:cnode1 00:13:47.995 Namespace ID: 1 size: 1GB 00:13:47.995 fused_ordering(0) 00:13:47.995 fused_ordering(1) 00:13:47.995 fused_ordering(2) 00:13:47.995 fused_ordering(3) 00:13:47.995 fused_ordering(4) 00:13:47.995 fused_ordering(5) 00:13:47.995 fused_ordering(6) 00:13:47.995 fused_ordering(7) 00:13:47.995 fused_ordering(8) 00:13:47.995 fused_ordering(9) 00:13:47.995 fused_ordering(10) 00:13:47.995 fused_ordering(11) 00:13:47.995 fused_ordering(12) 00:13:47.995 fused_ordering(13) 00:13:47.995 fused_ordering(14) 00:13:47.995 fused_ordering(15) 00:13:47.995 fused_ordering(16) 00:13:47.995 fused_ordering(17) 00:13:47.995 fused_ordering(18) 00:13:47.995 fused_ordering(19) 00:13:47.995 fused_ordering(20) 00:13:47.995 fused_ordering(21) 00:13:47.995 fused_ordering(22) 00:13:47.995 fused_ordering(23) 00:13:47.995 fused_ordering(24) 00:13:47.995 fused_ordering(25) 00:13:47.995 fused_ordering(26) 00:13:47.995 fused_ordering(27) 00:13:47.995 fused_ordering(28) 00:13:47.995 fused_ordering(29) 00:13:47.995 fused_ordering(30) 00:13:47.995 fused_ordering(31) 00:13:47.995 fused_ordering(32) 00:13:47.995 fused_ordering(33) 00:13:47.995 fused_ordering(34) 00:13:47.995 fused_ordering(35) 00:13:47.995 fused_ordering(36) 00:13:47.995 fused_ordering(37) 00:13:47.995 fused_ordering(38) 00:13:47.995 fused_ordering(39) 00:13:47.995 fused_ordering(40) 00:13:47.995 fused_ordering(41) 00:13:47.995 fused_ordering(42) 00:13:47.995 fused_ordering(43) 00:13:47.995 fused_ordering(44) 00:13:47.995 fused_ordering(45) 00:13:47.995 fused_ordering(46) 00:13:47.995 fused_ordering(47) 00:13:47.995 fused_ordering(48) 00:13:47.995 fused_ordering(49) 00:13:47.995 fused_ordering(50) 00:13:47.995 fused_ordering(51) 00:13:47.995 fused_ordering(52) 00:13:47.995 fused_ordering(53) 00:13:47.995 fused_ordering(54) 00:13:47.995 fused_ordering(55) 00:13:47.995 fused_ordering(56) 00:13:47.995 fused_ordering(57) 00:13:47.995 fused_ordering(58) 00:13:47.995 fused_ordering(59) 00:13:47.995 fused_ordering(60) 00:13:47.995 fused_ordering(61) 00:13:47.995 fused_ordering(62) 00:13:47.995 fused_ordering(63) 00:13:47.995 fused_ordering(64) 00:13:47.995 fused_ordering(65) 00:13:47.995 fused_ordering(66) 00:13:47.995 fused_ordering(67) 00:13:47.995 fused_ordering(68) 00:13:47.995 fused_ordering(69) 00:13:47.995 fused_ordering(70) 00:13:47.995 fused_ordering(71) 00:13:47.995 fused_ordering(72) 00:13:47.995 fused_ordering(73) 00:13:47.995 fused_ordering(74) 00:13:47.995 fused_ordering(75) 00:13:47.995 fused_ordering(76) 00:13:47.995 fused_ordering(77) 00:13:47.995 fused_ordering(78) 00:13:47.995 fused_ordering(79) 00:13:47.995 fused_ordering(80) 00:13:47.995 fused_ordering(81) 00:13:47.995 fused_ordering(82) 00:13:47.995 fused_ordering(83) 00:13:47.995 fused_ordering(84) 00:13:47.995 fused_ordering(85) 00:13:47.995 fused_ordering(86) 00:13:47.995 fused_ordering(87) 00:13:47.995 fused_ordering(88) 00:13:47.995 fused_ordering(89) 00:13:47.995 fused_ordering(90) 00:13:47.995 fused_ordering(91) 00:13:47.995 fused_ordering(92) 00:13:47.995 fused_ordering(93) 00:13:47.995 fused_ordering(94) 00:13:47.995 fused_ordering(95) 00:13:47.995 fused_ordering(96) 00:13:47.995 fused_ordering(97) 00:13:47.995 fused_ordering(98) 
00:13:47.995 fused_ordering(99) 00:13:47.995 fused_ordering(100) 00:13:47.995 fused_ordering(101) 00:13:47.995 fused_ordering(102) 00:13:47.995 fused_ordering(103) 00:13:47.995 fused_ordering(104) 00:13:47.995 fused_ordering(105) 00:13:47.995 fused_ordering(106) 00:13:47.995 fused_ordering(107) 00:13:47.995 fused_ordering(108) 00:13:47.995 fused_ordering(109) 00:13:47.995 fused_ordering(110) 00:13:47.995 fused_ordering(111) 00:13:47.995 fused_ordering(112) 00:13:47.995 fused_ordering(113) 00:13:47.995 fused_ordering(114) 00:13:47.995 fused_ordering(115) 00:13:47.995 fused_ordering(116) 00:13:47.995 fused_ordering(117) 00:13:47.995 fused_ordering(118) 00:13:47.995 fused_ordering(119) 00:13:47.995 fused_ordering(120) 00:13:47.995 fused_ordering(121) 00:13:47.995 fused_ordering(122) 00:13:47.995 fused_ordering(123) 00:13:47.995 fused_ordering(124) 00:13:47.995 fused_ordering(125) 00:13:47.995 fused_ordering(126) 00:13:47.995 fused_ordering(127) 00:13:47.995 fused_ordering(128) 00:13:47.995 fused_ordering(129) 00:13:47.995 fused_ordering(130) 00:13:47.995 fused_ordering(131) 00:13:47.995 fused_ordering(132) 00:13:47.995 fused_ordering(133) 00:13:47.995 fused_ordering(134) 00:13:47.995 fused_ordering(135) 00:13:47.995 fused_ordering(136) 00:13:47.995 fused_ordering(137) 00:13:47.995 fused_ordering(138) 00:13:47.995 fused_ordering(139) 00:13:47.995 fused_ordering(140) 00:13:47.995 fused_ordering(141) 00:13:47.995 fused_ordering(142) 00:13:47.995 fused_ordering(143) 00:13:47.995 fused_ordering(144) 00:13:47.995 fused_ordering(145) 00:13:47.995 fused_ordering(146) 00:13:47.996 fused_ordering(147) 00:13:47.996 fused_ordering(148) 00:13:47.996 fused_ordering(149) 00:13:47.996 fused_ordering(150) 00:13:47.996 fused_ordering(151) 00:13:47.996 fused_ordering(152) 00:13:47.996 fused_ordering(153) 00:13:47.996 fused_ordering(154) 00:13:47.996 fused_ordering(155) 00:13:47.996 fused_ordering(156) 00:13:47.996 fused_ordering(157) 00:13:47.996 fused_ordering(158) 00:13:47.996 fused_ordering(159) 00:13:47.996 fused_ordering(160) 00:13:47.996 fused_ordering(161) 00:13:47.996 fused_ordering(162) 00:13:47.996 fused_ordering(163) 00:13:47.996 fused_ordering(164) 00:13:47.996 fused_ordering(165) 00:13:47.996 fused_ordering(166) 00:13:47.996 fused_ordering(167) 00:13:47.996 fused_ordering(168) 00:13:47.996 fused_ordering(169) 00:13:47.996 fused_ordering(170) 00:13:47.996 fused_ordering(171) 00:13:47.996 fused_ordering(172) 00:13:47.996 fused_ordering(173) 00:13:47.996 fused_ordering(174) 00:13:47.996 fused_ordering(175) 00:13:47.996 fused_ordering(176) 00:13:47.996 fused_ordering(177) 00:13:47.996 fused_ordering(178) 00:13:47.996 fused_ordering(179) 00:13:47.996 fused_ordering(180) 00:13:47.996 fused_ordering(181) 00:13:47.996 fused_ordering(182) 00:13:47.996 fused_ordering(183) 00:13:47.996 fused_ordering(184) 00:13:47.996 fused_ordering(185) 00:13:47.996 fused_ordering(186) 00:13:47.996 fused_ordering(187) 00:13:47.996 fused_ordering(188) 00:13:47.996 fused_ordering(189) 00:13:47.996 fused_ordering(190) 00:13:47.996 fused_ordering(191) 00:13:47.996 fused_ordering(192) 00:13:47.996 fused_ordering(193) 00:13:47.996 fused_ordering(194) 00:13:47.996 fused_ordering(195) 00:13:47.996 fused_ordering(196) 00:13:47.996 fused_ordering(197) 00:13:47.996 fused_ordering(198) 00:13:47.996 fused_ordering(199) 00:13:47.996 fused_ordering(200) 00:13:47.996 fused_ordering(201) 00:13:47.996 fused_ordering(202) 00:13:47.996 fused_ordering(203) 00:13:47.996 fused_ordering(204) 00:13:47.996 fused_ordering(205) 00:13:48.565 
fused_ordering(206) 00:13:48.565 fused_ordering(207) 00:13:48.565 fused_ordering(208) 00:13:48.565 fused_ordering(209) 00:13:48.565 fused_ordering(210) 00:13:48.565 fused_ordering(211) 00:13:48.565 fused_ordering(212) 00:13:48.565 fused_ordering(213) 00:13:48.565 fused_ordering(214) 00:13:48.565 fused_ordering(215) 00:13:48.565 fused_ordering(216) 00:13:48.565 fused_ordering(217) 00:13:48.565 fused_ordering(218) 00:13:48.565 fused_ordering(219) 00:13:48.565 fused_ordering(220) 00:13:48.565 fused_ordering(221) 00:13:48.565 fused_ordering(222) 00:13:48.565 fused_ordering(223) 00:13:48.565 fused_ordering(224) 00:13:48.565 fused_ordering(225) 00:13:48.565 fused_ordering(226) 00:13:48.565 fused_ordering(227) 00:13:48.565 fused_ordering(228) 00:13:48.565 fused_ordering(229) 00:13:48.565 fused_ordering(230) 00:13:48.565 fused_ordering(231) 00:13:48.565 fused_ordering(232) 00:13:48.565 fused_ordering(233) 00:13:48.565 fused_ordering(234) 00:13:48.565 fused_ordering(235) 00:13:48.565 fused_ordering(236) 00:13:48.565 fused_ordering(237) 00:13:48.565 fused_ordering(238) 00:13:48.565 fused_ordering(239) 00:13:48.565 fused_ordering(240) 00:13:48.565 fused_ordering(241) 00:13:48.565 fused_ordering(242) 00:13:48.565 fused_ordering(243) 00:13:48.565 fused_ordering(244) 00:13:48.565 fused_ordering(245) 00:13:48.565 fused_ordering(246) 00:13:48.565 fused_ordering(247) 00:13:48.565 fused_ordering(248) 00:13:48.565 fused_ordering(249) 00:13:48.565 fused_ordering(250) 00:13:48.565 fused_ordering(251) 00:13:48.565 fused_ordering(252) 00:13:48.565 fused_ordering(253) 00:13:48.565 fused_ordering(254) 00:13:48.565 fused_ordering(255) 00:13:48.565 fused_ordering(256) 00:13:48.565 fused_ordering(257) 00:13:48.565 fused_ordering(258) 00:13:48.565 fused_ordering(259) 00:13:48.565 fused_ordering(260) 00:13:48.565 fused_ordering(261) 00:13:48.565 fused_ordering(262) 00:13:48.565 fused_ordering(263) 00:13:48.565 fused_ordering(264) 00:13:48.565 fused_ordering(265) 00:13:48.565 fused_ordering(266) 00:13:48.565 fused_ordering(267) 00:13:48.565 fused_ordering(268) 00:13:48.565 fused_ordering(269) 00:13:48.565 fused_ordering(270) 00:13:48.565 fused_ordering(271) 00:13:48.565 fused_ordering(272) 00:13:48.565 fused_ordering(273) 00:13:48.565 fused_ordering(274) 00:13:48.565 fused_ordering(275) 00:13:48.565 fused_ordering(276) 00:13:48.565 fused_ordering(277) 00:13:48.565 fused_ordering(278) 00:13:48.565 fused_ordering(279) 00:13:48.565 fused_ordering(280) 00:13:48.565 fused_ordering(281) 00:13:48.565 fused_ordering(282) 00:13:48.565 fused_ordering(283) 00:13:48.565 fused_ordering(284) 00:13:48.565 fused_ordering(285) 00:13:48.565 fused_ordering(286) 00:13:48.565 fused_ordering(287) 00:13:48.565 fused_ordering(288) 00:13:48.565 fused_ordering(289) 00:13:48.565 fused_ordering(290) 00:13:48.565 fused_ordering(291) 00:13:48.565 fused_ordering(292) 00:13:48.565 fused_ordering(293) 00:13:48.565 fused_ordering(294) 00:13:48.565 fused_ordering(295) 00:13:48.565 fused_ordering(296) 00:13:48.565 fused_ordering(297) 00:13:48.565 fused_ordering(298) 00:13:48.565 fused_ordering(299) 00:13:48.565 fused_ordering(300) 00:13:48.565 fused_ordering(301) 00:13:48.565 fused_ordering(302) 00:13:48.565 fused_ordering(303) 00:13:48.565 fused_ordering(304) 00:13:48.565 fused_ordering(305) 00:13:48.565 fused_ordering(306) 00:13:48.565 fused_ordering(307) 00:13:48.565 fused_ordering(308) 00:13:48.565 fused_ordering(309) 00:13:48.565 fused_ordering(310) 00:13:48.565 fused_ordering(311) 00:13:48.565 fused_ordering(312) 00:13:48.565 fused_ordering(313) 
00:13:48.565 fused_ordering(314) 00:13:48.565 fused_ordering(315) 00:13:48.565 fused_ordering(316) 00:13:48.565 fused_ordering(317) 00:13:48.565 fused_ordering(318) 00:13:48.565 fused_ordering(319) 00:13:48.565 fused_ordering(320) 00:13:48.565 fused_ordering(321) 00:13:48.565 fused_ordering(322) 00:13:48.565 fused_ordering(323) 00:13:48.565 fused_ordering(324) 00:13:48.565 fused_ordering(325) 00:13:48.565 fused_ordering(326) 00:13:48.565 fused_ordering(327) 00:13:48.565 fused_ordering(328) 00:13:48.565 fused_ordering(329) 00:13:48.565 fused_ordering(330) 00:13:48.565 fused_ordering(331) 00:13:48.565 fused_ordering(332) 00:13:48.565 fused_ordering(333) 00:13:48.565 fused_ordering(334) 00:13:48.565 fused_ordering(335) 00:13:48.565 fused_ordering(336) 00:13:48.565 fused_ordering(337) 00:13:48.565 fused_ordering(338) 00:13:48.565 fused_ordering(339) 00:13:48.565 fused_ordering(340) 00:13:48.565 fused_ordering(341) 00:13:48.565 fused_ordering(342) 00:13:48.565 fused_ordering(343) 00:13:48.565 fused_ordering(344) 00:13:48.565 fused_ordering(345) 00:13:48.565 fused_ordering(346) 00:13:48.565 fused_ordering(347) 00:13:48.565 fused_ordering(348) 00:13:48.565 fused_ordering(349) 00:13:48.565 fused_ordering(350) 00:13:48.565 fused_ordering(351) 00:13:48.565 fused_ordering(352) 00:13:48.565 fused_ordering(353) 00:13:48.565 fused_ordering(354) 00:13:48.565 fused_ordering(355) 00:13:48.565 fused_ordering(356) 00:13:48.565 fused_ordering(357) 00:13:48.565 fused_ordering(358) 00:13:48.565 fused_ordering(359) 00:13:48.565 fused_ordering(360) 00:13:48.565 fused_ordering(361) 00:13:48.565 fused_ordering(362) 00:13:48.565 fused_ordering(363) 00:13:48.565 fused_ordering(364) 00:13:48.565 fused_ordering(365) 00:13:48.565 fused_ordering(366) 00:13:48.565 fused_ordering(367) 00:13:48.565 fused_ordering(368) 00:13:48.565 fused_ordering(369) 00:13:48.565 fused_ordering(370) 00:13:48.565 fused_ordering(371) 00:13:48.565 fused_ordering(372) 00:13:48.565 fused_ordering(373) 00:13:48.565 fused_ordering(374) 00:13:48.565 fused_ordering(375) 00:13:48.565 fused_ordering(376) 00:13:48.565 fused_ordering(377) 00:13:48.565 fused_ordering(378) 00:13:48.565 fused_ordering(379) 00:13:48.565 fused_ordering(380) 00:13:48.565 fused_ordering(381) 00:13:48.565 fused_ordering(382) 00:13:48.565 fused_ordering(383) 00:13:48.565 fused_ordering(384) 00:13:48.565 fused_ordering(385) 00:13:48.565 fused_ordering(386) 00:13:48.565 fused_ordering(387) 00:13:48.565 fused_ordering(388) 00:13:48.565 fused_ordering(389) 00:13:48.565 fused_ordering(390) 00:13:48.565 fused_ordering(391) 00:13:48.565 fused_ordering(392) 00:13:48.565 fused_ordering(393) 00:13:48.565 fused_ordering(394) 00:13:48.565 fused_ordering(395) 00:13:48.565 fused_ordering(396) 00:13:48.565 fused_ordering(397) 00:13:48.565 fused_ordering(398) 00:13:48.565 fused_ordering(399) 00:13:48.565 fused_ordering(400) 00:13:48.565 fused_ordering(401) 00:13:48.565 fused_ordering(402) 00:13:48.565 fused_ordering(403) 00:13:48.565 fused_ordering(404) 00:13:48.565 fused_ordering(405) 00:13:48.565 fused_ordering(406) 00:13:48.565 fused_ordering(407) 00:13:48.565 fused_ordering(408) 00:13:48.565 fused_ordering(409) 00:13:48.565 fused_ordering(410) 00:13:48.825 fused_ordering(411) 00:13:48.825 fused_ordering(412) 00:13:48.825 fused_ordering(413) 00:13:48.825 fused_ordering(414) 00:13:48.825 fused_ordering(415) 00:13:48.825 fused_ordering(416) 00:13:48.825 fused_ordering(417) 00:13:48.825 fused_ordering(418) 00:13:48.825 fused_ordering(419) 00:13:48.825 fused_ordering(420) 00:13:48.825 
fused_ordering(421) 00:13:48.825 fused_ordering(422) 00:13:48.825 fused_ordering(423) 00:13:48.825 fused_ordering(424) 00:13:48.825 fused_ordering(425) 00:13:48.825 fused_ordering(426) 00:13:48.825 fused_ordering(427) 00:13:48.825 fused_ordering(428) 00:13:48.825 fused_ordering(429) 00:13:48.825 fused_ordering(430) 00:13:48.825 fused_ordering(431) 00:13:48.825 fused_ordering(432) 00:13:48.825 fused_ordering(433) 00:13:48.825 fused_ordering(434) 00:13:48.825 fused_ordering(435) 00:13:48.825 fused_ordering(436) 00:13:48.825 fused_ordering(437) 00:13:48.825 fused_ordering(438) 00:13:48.825 fused_ordering(439) 00:13:48.825 fused_ordering(440) 00:13:48.825 fused_ordering(441) 00:13:48.825 fused_ordering(442) 00:13:48.825 fused_ordering(443) 00:13:48.825 fused_ordering(444) 00:13:48.825 fused_ordering(445) 00:13:48.825 fused_ordering(446) 00:13:48.825 fused_ordering(447) 00:13:48.825 fused_ordering(448) 00:13:48.825 fused_ordering(449) 00:13:48.825 fused_ordering(450) 00:13:48.825 fused_ordering(451) 00:13:48.825 fused_ordering(452) 00:13:48.825 fused_ordering(453) 00:13:48.825 fused_ordering(454) 00:13:48.825 fused_ordering(455) 00:13:48.825 fused_ordering(456) 00:13:48.825 fused_ordering(457) 00:13:48.825 fused_ordering(458) 00:13:48.825 fused_ordering(459) 00:13:48.825 fused_ordering(460) 00:13:48.825 fused_ordering(461) 00:13:48.825 fused_ordering(462) 00:13:48.825 fused_ordering(463) 00:13:48.825 fused_ordering(464) 00:13:48.825 fused_ordering(465) 00:13:48.825 fused_ordering(466) 00:13:48.825 fused_ordering(467) 00:13:48.825 fused_ordering(468) 00:13:48.825 fused_ordering(469) 00:13:48.825 fused_ordering(470) 00:13:48.825 fused_ordering(471) 00:13:48.825 fused_ordering(472) 00:13:48.825 fused_ordering(473) 00:13:48.825 fused_ordering(474) 00:13:48.825 fused_ordering(475) 00:13:48.825 fused_ordering(476) 00:13:48.825 fused_ordering(477) 00:13:48.825 fused_ordering(478) 00:13:48.825 fused_ordering(479) 00:13:48.825 fused_ordering(480) 00:13:48.825 fused_ordering(481) 00:13:48.825 fused_ordering(482) 00:13:48.825 fused_ordering(483) 00:13:48.825 fused_ordering(484) 00:13:48.825 fused_ordering(485) 00:13:48.825 fused_ordering(486) 00:13:48.825 fused_ordering(487) 00:13:48.825 fused_ordering(488) 00:13:48.825 fused_ordering(489) 00:13:48.825 fused_ordering(490) 00:13:48.825 fused_ordering(491) 00:13:48.825 fused_ordering(492) 00:13:48.825 fused_ordering(493) 00:13:48.825 fused_ordering(494) 00:13:48.825 fused_ordering(495) 00:13:48.825 fused_ordering(496) 00:13:48.825 fused_ordering(497) 00:13:48.825 fused_ordering(498) 00:13:48.825 fused_ordering(499) 00:13:48.825 fused_ordering(500) 00:13:48.825 fused_ordering(501) 00:13:48.825 fused_ordering(502) 00:13:48.825 fused_ordering(503) 00:13:48.825 fused_ordering(504) 00:13:48.825 fused_ordering(505) 00:13:48.825 fused_ordering(506) 00:13:48.825 fused_ordering(507) 00:13:48.825 fused_ordering(508) 00:13:48.825 fused_ordering(509) 00:13:48.825 fused_ordering(510) 00:13:48.825 fused_ordering(511) 00:13:48.825 fused_ordering(512) 00:13:48.825 fused_ordering(513) 00:13:48.825 fused_ordering(514) 00:13:48.825 fused_ordering(515) 00:13:48.825 fused_ordering(516) 00:13:48.825 fused_ordering(517) 00:13:48.825 fused_ordering(518) 00:13:48.825 fused_ordering(519) 00:13:48.825 fused_ordering(520) 00:13:48.825 fused_ordering(521) 00:13:48.825 fused_ordering(522) 00:13:48.825 fused_ordering(523) 00:13:48.825 fused_ordering(524) 00:13:48.825 fused_ordering(525) 00:13:48.825 fused_ordering(526) 00:13:48.825 fused_ordering(527) 00:13:48.825 fused_ordering(528) 
00:13:48.825 fused_ordering(529) 00:13:48.825 fused_ordering(530) 00:13:48.825 fused_ordering(531) 00:13:48.825 fused_ordering(532) 00:13:48.825 fused_ordering(533) 00:13:48.825 fused_ordering(534) 00:13:48.825 fused_ordering(535) 00:13:48.825 fused_ordering(536) 00:13:48.825 fused_ordering(537) 00:13:48.825 fused_ordering(538) 00:13:48.825 fused_ordering(539) 00:13:48.825 fused_ordering(540) 00:13:48.825 fused_ordering(541) 00:13:48.825 fused_ordering(542) 00:13:48.825 fused_ordering(543) 00:13:48.825 fused_ordering(544) 00:13:48.825 fused_ordering(545) 00:13:48.825 fused_ordering(546) 00:13:48.825 fused_ordering(547) 00:13:48.825 fused_ordering(548) 00:13:48.825 fused_ordering(549) 00:13:48.825 fused_ordering(550) 00:13:48.825 fused_ordering(551) 00:13:48.825 fused_ordering(552) 00:13:48.825 fused_ordering(553) 00:13:48.825 fused_ordering(554) 00:13:48.825 fused_ordering(555) 00:13:48.825 fused_ordering(556) 00:13:48.825 fused_ordering(557) 00:13:48.825 fused_ordering(558) 00:13:48.825 fused_ordering(559) 00:13:48.825 fused_ordering(560) 00:13:48.825 fused_ordering(561) 00:13:48.825 fused_ordering(562) 00:13:48.825 fused_ordering(563) 00:13:48.825 fused_ordering(564) 00:13:48.825 fused_ordering(565) 00:13:48.825 fused_ordering(566) 00:13:48.825 fused_ordering(567) 00:13:48.825 fused_ordering(568) 00:13:48.825 fused_ordering(569) 00:13:48.825 fused_ordering(570) 00:13:48.825 fused_ordering(571) 00:13:48.825 fused_ordering(572) 00:13:48.825 fused_ordering(573) 00:13:48.825 fused_ordering(574) 00:13:48.825 fused_ordering(575) 00:13:48.825 fused_ordering(576) 00:13:48.825 fused_ordering(577) 00:13:48.825 fused_ordering(578) 00:13:48.825 fused_ordering(579) 00:13:48.825 fused_ordering(580) 00:13:48.825 fused_ordering(581) 00:13:48.825 fused_ordering(582) 00:13:48.825 fused_ordering(583) 00:13:48.825 fused_ordering(584) 00:13:48.825 fused_ordering(585) 00:13:48.825 fused_ordering(586) 00:13:48.825 fused_ordering(587) 00:13:48.825 fused_ordering(588) 00:13:48.825 fused_ordering(589) 00:13:48.825 fused_ordering(590) 00:13:48.825 fused_ordering(591) 00:13:48.825 fused_ordering(592) 00:13:48.825 fused_ordering(593) 00:13:48.825 fused_ordering(594) 00:13:48.825 fused_ordering(595) 00:13:48.825 fused_ordering(596) 00:13:48.825 fused_ordering(597) 00:13:48.825 fused_ordering(598) 00:13:48.825 fused_ordering(599) 00:13:48.825 fused_ordering(600) 00:13:48.825 fused_ordering(601) 00:13:48.825 fused_ordering(602) 00:13:48.825 fused_ordering(603) 00:13:48.825 fused_ordering(604) 00:13:48.825 fused_ordering(605) 00:13:48.825 fused_ordering(606) 00:13:48.825 fused_ordering(607) 00:13:48.825 fused_ordering(608) 00:13:48.825 fused_ordering(609) 00:13:48.825 fused_ordering(610) 00:13:48.825 fused_ordering(611) 00:13:48.825 fused_ordering(612) 00:13:48.825 fused_ordering(613) 00:13:48.825 fused_ordering(614) 00:13:48.825 fused_ordering(615) 00:13:49.396 fused_ordering(616) 00:13:49.396 fused_ordering(617) 00:13:49.396 fused_ordering(618) 00:13:49.396 fused_ordering(619) 00:13:49.396 fused_ordering(620) 00:13:49.396 fused_ordering(621) 00:13:49.396 fused_ordering(622) 00:13:49.396 fused_ordering(623) 00:13:49.396 fused_ordering(624) 00:13:49.396 fused_ordering(625) 00:13:49.396 fused_ordering(626) 00:13:49.396 fused_ordering(627) 00:13:49.396 fused_ordering(628) 00:13:49.396 fused_ordering(629) 00:13:49.396 fused_ordering(630) 00:13:49.396 fused_ordering(631) 00:13:49.396 fused_ordering(632) 00:13:49.396 fused_ordering(633) 00:13:49.396 fused_ordering(634) 00:13:49.396 fused_ordering(635) 00:13:49.396 
fused_ordering(636) 00:13:49.396 fused_ordering(637) 00:13:49.396 fused_ordering(638) 00:13:49.396 fused_ordering(639) 00:13:49.396 fused_ordering(640) 00:13:49.396 fused_ordering(641) 00:13:49.396 fused_ordering(642) 00:13:49.396 fused_ordering(643) 00:13:49.396 fused_ordering(644) 00:13:49.396 fused_ordering(645) 00:13:49.396 fused_ordering(646) 00:13:49.396 fused_ordering(647) 00:13:49.396 fused_ordering(648) 00:13:49.396 fused_ordering(649) 00:13:49.396 fused_ordering(650) 00:13:49.396 fused_ordering(651) 00:13:49.396 fused_ordering(652) 00:13:49.396 fused_ordering(653) 00:13:49.396 fused_ordering(654) 00:13:49.396 fused_ordering(655) 00:13:49.396 fused_ordering(656) 00:13:49.396 fused_ordering(657) 00:13:49.396 fused_ordering(658) 00:13:49.396 fused_ordering(659) 00:13:49.396 fused_ordering(660) 00:13:49.396 fused_ordering(661) 00:13:49.396 fused_ordering(662) 00:13:49.396 fused_ordering(663) 00:13:49.396 fused_ordering(664) 00:13:49.396 fused_ordering(665) 00:13:49.396 fused_ordering(666) 00:13:49.396 fused_ordering(667) 00:13:49.396 fused_ordering(668) 00:13:49.396 fused_ordering(669) 00:13:49.396 fused_ordering(670) 00:13:49.396 fused_ordering(671) 00:13:49.396 fused_ordering(672) 00:13:49.396 fused_ordering(673) 00:13:49.396 fused_ordering(674) 00:13:49.396 fused_ordering(675) 00:13:49.396 fused_ordering(676) 00:13:49.396 fused_ordering(677) 00:13:49.396 fused_ordering(678) 00:13:49.396 fused_ordering(679) 00:13:49.396 fused_ordering(680) 00:13:49.396 fused_ordering(681) 00:13:49.396 fused_ordering(682) 00:13:49.396 fused_ordering(683) 00:13:49.396 fused_ordering(684) 00:13:49.396 fused_ordering(685) 00:13:49.396 fused_ordering(686) 00:13:49.396 fused_ordering(687) 00:13:49.396 fused_ordering(688) 00:13:49.396 fused_ordering(689) 00:13:49.396 fused_ordering(690) 00:13:49.396 fused_ordering(691) 00:13:49.396 fused_ordering(692) 00:13:49.396 fused_ordering(693) 00:13:49.396 fused_ordering(694) 00:13:49.396 fused_ordering(695) 00:13:49.396 fused_ordering(696) 00:13:49.396 fused_ordering(697) 00:13:49.396 fused_ordering(698) 00:13:49.396 fused_ordering(699) 00:13:49.396 fused_ordering(700) 00:13:49.396 fused_ordering(701) 00:13:49.396 fused_ordering(702) 00:13:49.396 fused_ordering(703) 00:13:49.396 fused_ordering(704) 00:13:49.396 fused_ordering(705) 00:13:49.396 fused_ordering(706) 00:13:49.396 fused_ordering(707) 00:13:49.396 fused_ordering(708) 00:13:49.396 fused_ordering(709) 00:13:49.396 fused_ordering(710) 00:13:49.396 fused_ordering(711) 00:13:49.396 fused_ordering(712) 00:13:49.396 fused_ordering(713) 00:13:49.396 fused_ordering(714) 00:13:49.396 fused_ordering(715) 00:13:49.396 fused_ordering(716) 00:13:49.396 fused_ordering(717) 00:13:49.396 fused_ordering(718) 00:13:49.397 fused_ordering(719) 00:13:49.397 fused_ordering(720) 00:13:49.397 fused_ordering(721) 00:13:49.397 fused_ordering(722) 00:13:49.397 fused_ordering(723) 00:13:49.397 fused_ordering(724) 00:13:49.397 fused_ordering(725) 00:13:49.397 fused_ordering(726) 00:13:49.397 fused_ordering(727) 00:13:49.397 fused_ordering(728) 00:13:49.397 fused_ordering(729) 00:13:49.397 fused_ordering(730) 00:13:49.397 fused_ordering(731) 00:13:49.397 fused_ordering(732) 00:13:49.397 fused_ordering(733) 00:13:49.397 fused_ordering(734) 00:13:49.397 fused_ordering(735) 00:13:49.397 fused_ordering(736) 00:13:49.397 fused_ordering(737) 00:13:49.397 fused_ordering(738) 00:13:49.397 fused_ordering(739) 00:13:49.397 fused_ordering(740) 00:13:49.397 fused_ordering(741) 00:13:49.397 fused_ordering(742) 00:13:49.397 fused_ordering(743) 
00:13:49.397 fused_ordering(744) 00:13:49.397 fused_ordering(745) 00:13:49.397 fused_ordering(746) 00:13:49.397 fused_ordering(747) 00:13:49.397 fused_ordering(748) 00:13:49.397 fused_ordering(749) 00:13:49.397 fused_ordering(750) 00:13:49.397 fused_ordering(751) 00:13:49.397 fused_ordering(752) 00:13:49.397 fused_ordering(753) 00:13:49.397 fused_ordering(754) 00:13:49.397 fused_ordering(755) 00:13:49.397 fused_ordering(756) 00:13:49.397 fused_ordering(757) 00:13:49.397 fused_ordering(758) 00:13:49.397 fused_ordering(759) 00:13:49.397 fused_ordering(760) 00:13:49.397 fused_ordering(761) 00:13:49.397 fused_ordering(762) 00:13:49.397 fused_ordering(763) 00:13:49.397 fused_ordering(764) 00:13:49.397 fused_ordering(765) 00:13:49.397 fused_ordering(766) 00:13:49.397 fused_ordering(767) 00:13:49.397 fused_ordering(768) 00:13:49.397 fused_ordering(769) 00:13:49.397 fused_ordering(770) 00:13:49.397 fused_ordering(771) 00:13:49.397 fused_ordering(772) 00:13:49.397 fused_ordering(773) 00:13:49.397 fused_ordering(774) 00:13:49.397 fused_ordering(775) 00:13:49.397 fused_ordering(776) 00:13:49.397 fused_ordering(777) 00:13:49.397 fused_ordering(778) 00:13:49.397 fused_ordering(779) 00:13:49.397 fused_ordering(780) 00:13:49.397 fused_ordering(781) 00:13:49.397 fused_ordering(782) 00:13:49.397 fused_ordering(783) 00:13:49.397 fused_ordering(784) 00:13:49.397 fused_ordering(785) 00:13:49.397 fused_ordering(786) 00:13:49.397 fused_ordering(787) 00:13:49.397 fused_ordering(788) 00:13:49.397 fused_ordering(789) 00:13:49.397 fused_ordering(790) 00:13:49.397 fused_ordering(791) 00:13:49.397 fused_ordering(792) 00:13:49.397 fused_ordering(793) 00:13:49.397 fused_ordering(794) 00:13:49.397 fused_ordering(795) 00:13:49.397 fused_ordering(796) 00:13:49.397 fused_ordering(797) 00:13:49.397 fused_ordering(798) 00:13:49.397 fused_ordering(799) 00:13:49.397 fused_ordering(800) 00:13:49.397 fused_ordering(801) 00:13:49.397 fused_ordering(802) 00:13:49.397 fused_ordering(803) 00:13:49.397 fused_ordering(804) 00:13:49.397 fused_ordering(805) 00:13:49.397 fused_ordering(806) 00:13:49.397 fused_ordering(807) 00:13:49.397 fused_ordering(808) 00:13:49.397 fused_ordering(809) 00:13:49.397 fused_ordering(810) 00:13:49.397 fused_ordering(811) 00:13:49.397 fused_ordering(812) 00:13:49.397 fused_ordering(813) 00:13:49.397 fused_ordering(814) 00:13:49.397 fused_ordering(815) 00:13:49.397 fused_ordering(816) 00:13:49.397 fused_ordering(817) 00:13:49.397 fused_ordering(818) 00:13:49.397 fused_ordering(819) 00:13:49.397 fused_ordering(820) 00:13:49.968 fused_ordering(821) 00:13:49.968 fused_ordering(822) 00:13:49.968 fused_ordering(823) 00:13:49.968 fused_ordering(824) 00:13:49.968 fused_ordering(825) 00:13:49.968 fused_ordering(826) 00:13:49.968 fused_ordering(827) 00:13:49.968 fused_ordering(828) 00:13:49.968 fused_ordering(829) 00:13:49.968 fused_ordering(830) 00:13:49.968 fused_ordering(831) 00:13:49.968 fused_ordering(832) 00:13:49.968 fused_ordering(833) 00:13:49.968 fused_ordering(834) 00:13:49.968 fused_ordering(835) 00:13:49.968 fused_ordering(836) 00:13:49.968 fused_ordering(837) 00:13:49.968 fused_ordering(838) 00:13:49.968 fused_ordering(839) 00:13:49.968 fused_ordering(840) 00:13:49.968 fused_ordering(841) 00:13:49.968 fused_ordering(842) 00:13:49.968 fused_ordering(843) 00:13:49.968 fused_ordering(844) 00:13:49.968 fused_ordering(845) 00:13:49.968 fused_ordering(846) 00:13:49.968 fused_ordering(847) 00:13:49.968 fused_ordering(848) 00:13:49.968 fused_ordering(849) 00:13:49.968 fused_ordering(850) 00:13:49.968 
fused_ordering(851) 00:13:49.968 fused_ordering(852) 00:13:49.968 fused_ordering(853) 00:13:49.968 fused_ordering(854) 00:13:49.968 fused_ordering(855) 00:13:49.968 fused_ordering(856) 00:13:49.968 fused_ordering(857) 00:13:49.968 fused_ordering(858) 00:13:49.968 fused_ordering(859) 00:13:49.968 fused_ordering(860) 00:13:49.968 fused_ordering(861) 00:13:49.968 fused_ordering(862) 00:13:49.968 fused_ordering(863) 00:13:49.968 fused_ordering(864) 00:13:49.968 fused_ordering(865) 00:13:49.968 fused_ordering(866) 00:13:49.968 fused_ordering(867) 00:13:49.968 fused_ordering(868) 00:13:49.968 fused_ordering(869) 00:13:49.968 fused_ordering(870) 00:13:49.968 fused_ordering(871) 00:13:49.968 fused_ordering(872) 00:13:49.968 fused_ordering(873) 00:13:49.968 fused_ordering(874) 00:13:49.968 fused_ordering(875) 00:13:49.968 fused_ordering(876) 00:13:49.968 fused_ordering(877) 00:13:49.968 fused_ordering(878) 00:13:49.968 fused_ordering(879) 00:13:49.968 fused_ordering(880) 00:13:49.968 fused_ordering(881) 00:13:49.968 fused_ordering(882) 00:13:49.968 fused_ordering(883) 00:13:49.968 fused_ordering(884) 00:13:49.968 fused_ordering(885) 00:13:49.968 fused_ordering(886) 00:13:49.968 fused_ordering(887) 00:13:49.968 fused_ordering(888) 00:13:49.968 fused_ordering(889) 00:13:49.968 fused_ordering(890) 00:13:49.968 fused_ordering(891) 00:13:49.968 fused_ordering(892) 00:13:49.968 fused_ordering(893) 00:13:49.968 fused_ordering(894) 00:13:49.968 fused_ordering(895) 00:13:49.968 fused_ordering(896) 00:13:49.968 fused_ordering(897) 00:13:49.968 fused_ordering(898) 00:13:49.968 fused_ordering(899) 00:13:49.968 fused_ordering(900) 00:13:49.968 fused_ordering(901) 00:13:49.968 fused_ordering(902) 00:13:49.968 fused_ordering(903) 00:13:49.968 fused_ordering(904) 00:13:49.968 fused_ordering(905) 00:13:49.968 fused_ordering(906) 00:13:49.968 fused_ordering(907) 00:13:49.968 fused_ordering(908) 00:13:49.968 fused_ordering(909) 00:13:49.968 fused_ordering(910) 00:13:49.968 fused_ordering(911) 00:13:49.968 fused_ordering(912) 00:13:49.968 fused_ordering(913) 00:13:49.968 fused_ordering(914) 00:13:49.968 fused_ordering(915) 00:13:49.968 fused_ordering(916) 00:13:49.968 fused_ordering(917) 00:13:49.968 fused_ordering(918) 00:13:49.968 fused_ordering(919) 00:13:49.968 fused_ordering(920) 00:13:49.968 fused_ordering(921) 00:13:49.968 fused_ordering(922) 00:13:49.968 fused_ordering(923) 00:13:49.968 fused_ordering(924) 00:13:49.968 fused_ordering(925) 00:13:49.968 fused_ordering(926) 00:13:49.968 fused_ordering(927) 00:13:49.968 fused_ordering(928) 00:13:49.968 fused_ordering(929) 00:13:49.968 fused_ordering(930) 00:13:49.968 fused_ordering(931) 00:13:49.968 fused_ordering(932) 00:13:49.968 fused_ordering(933) 00:13:49.968 fused_ordering(934) 00:13:49.968 fused_ordering(935) 00:13:49.968 fused_ordering(936) 00:13:49.968 fused_ordering(937) 00:13:49.968 fused_ordering(938) 00:13:49.968 fused_ordering(939) 00:13:49.968 fused_ordering(940) 00:13:49.968 fused_ordering(941) 00:13:49.968 fused_ordering(942) 00:13:49.968 fused_ordering(943) 00:13:49.968 fused_ordering(944) 00:13:49.968 fused_ordering(945) 00:13:49.968 fused_ordering(946) 00:13:49.968 fused_ordering(947) 00:13:49.968 fused_ordering(948) 00:13:49.968 fused_ordering(949) 00:13:49.968 fused_ordering(950) 00:13:49.968 fused_ordering(951) 00:13:49.968 fused_ordering(952) 00:13:49.968 fused_ordering(953) 00:13:49.968 fused_ordering(954) 00:13:49.968 fused_ordering(955) 00:13:49.968 fused_ordering(956) 00:13:49.968 fused_ordering(957) 00:13:49.968 fused_ordering(958) 
00:13:49.968 fused_ordering(959) 00:13:49.968 fused_ordering(960) 00:13:49.968 fused_ordering(961) 00:13:49.968 fused_ordering(962) 00:13:49.968 fused_ordering(963) 00:13:49.968 fused_ordering(964) 00:13:49.968 fused_ordering(965) 00:13:49.968 fused_ordering(966) 00:13:49.968 fused_ordering(967) 00:13:49.968 fused_ordering(968) 00:13:49.968 fused_ordering(969) 00:13:49.968 fused_ordering(970) 00:13:49.968 fused_ordering(971) 00:13:49.968 fused_ordering(972) 00:13:49.968 fused_ordering(973) 00:13:49.968 fused_ordering(974) 00:13:49.968 fused_ordering(975) 00:13:49.968 fused_ordering(976) 00:13:49.968 fused_ordering(977) 00:13:49.968 fused_ordering(978) 00:13:49.968 fused_ordering(979) 00:13:49.968 fused_ordering(980) 00:13:49.968 fused_ordering(981) 00:13:49.968 fused_ordering(982) 00:13:49.968 fused_ordering(983) 00:13:49.968 fused_ordering(984) 00:13:49.968 fused_ordering(985) 00:13:49.968 fused_ordering(986) 00:13:49.968 fused_ordering(987) 00:13:49.968 fused_ordering(988) 00:13:49.968 fused_ordering(989) 00:13:49.968 fused_ordering(990) 00:13:49.968 fused_ordering(991) 00:13:49.968 fused_ordering(992) 00:13:49.968 fused_ordering(993) 00:13:49.968 fused_ordering(994) 00:13:49.969 fused_ordering(995) 00:13:49.969 fused_ordering(996) 00:13:49.969 fused_ordering(997) 00:13:49.969 fused_ordering(998) 00:13:49.969 fused_ordering(999) 00:13:49.969 fused_ordering(1000) 00:13:49.969 fused_ordering(1001) 00:13:49.969 fused_ordering(1002) 00:13:49.969 fused_ordering(1003) 00:13:49.969 fused_ordering(1004) 00:13:49.969 fused_ordering(1005) 00:13:49.969 fused_ordering(1006) 00:13:49.969 fused_ordering(1007) 00:13:49.969 fused_ordering(1008) 00:13:49.969 fused_ordering(1009) 00:13:49.969 fused_ordering(1010) 00:13:49.969 fused_ordering(1011) 00:13:49.969 fused_ordering(1012) 00:13:49.969 fused_ordering(1013) 00:13:49.969 fused_ordering(1014) 00:13:49.969 fused_ordering(1015) 00:13:49.969 fused_ordering(1016) 00:13:49.969 fused_ordering(1017) 00:13:49.969 fused_ordering(1018) 00:13:49.969 fused_ordering(1019) 00:13:49.969 fused_ordering(1020) 00:13:49.969 fused_ordering(1021) 00:13:49.969 fused_ordering(1022) 00:13:49.969 fused_ordering(1023) 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.969 rmmod nvme_tcp 00:13:49.969 rmmod nvme_fabrics 00:13:49.969 rmmod nvme_keyring 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:49.969 07:28:08 
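
The fused_ordering(0) through fused_ordering(1023) counters above are the test binary's per-command progress output over its TCP connection to cnode1; the verdict comes from the harness (the END TEST banner below) rather than from the counters themselves. After the tool exits, nvmftestfini tears the stack down, wrapping the module unloads in a retry loop (the set +e / for i in {1..20} dance in the trace), presumably to tolerate references that are still draining. Reduced to a sketch:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp &&        # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
        modprobe -v -r nvme-fabrics && break
    done
    set -e
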
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3333606 ']' 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3333606 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 3333606 ']' 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 3333606 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:49.969 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3333606 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3333606' 00:13:50.230 killing process with pid 3333606 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 3333606 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 3333606 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.230 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.772 00:13:52.772 real 0m13.526s 00:13:52.772 user 0m7.109s 00:13:52.772 sys 0m7.225s 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:52.772 ************************************ 00:13:52.772 END TEST nvmf_fused_ordering 00:13:52.772 
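
Cleanup removes only the firewall state the test itself added: the earlier ipts() insertion tagged its rule with an SPDK_NVMF comment, so teardown (the iptr trace above) can drop every tagged rule in one pass without disturbing the rest of the ruleset:

    # at setup time: tag the rule with a recognizable comment
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # at teardown: rewrite the ruleset minus anything carrying the tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The same pattern generalizes to any test that must add temporary firewall rules and guarantee their removal on exit.
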
************************************ 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:52.772 ************************************ 00:13:52.772 START TEST nvmf_ns_masking 00:13:52.772 ************************************ 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:52.772 * Looking for test storage... 00:13:52.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:52.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.772 --rc genhtml_branch_coverage=1 00:13:52.772 --rc genhtml_function_coverage=1 00:13:52.772 --rc genhtml_legend=1 00:13:52.772 --rc geninfo_all_blocks=1 00:13:52.772 --rc geninfo_unexecuted_blocks=1 00:13:52.772 00:13:52.772 ' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:52.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.772 --rc genhtml_branch_coverage=1 00:13:52.772 --rc genhtml_function_coverage=1 00:13:52.772 --rc genhtml_legend=1 00:13:52.772 --rc geninfo_all_blocks=1 00:13:52.772 --rc geninfo_unexecuted_blocks=1 00:13:52.772 00:13:52.772 ' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:52.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.772 --rc genhtml_branch_coverage=1 00:13:52.772 --rc genhtml_function_coverage=1 00:13:52.772 --rc genhtml_legend=1 00:13:52.772 --rc geninfo_all_blocks=1 00:13:52.772 --rc geninfo_unexecuted_blocks=1 00:13:52.772 00:13:52.772 ' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:52.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.772 --rc genhtml_branch_coverage=1 00:13:52.772 --rc genhtml_function_coverage=1 00:13:52.772 --rc genhtml_legend=1 00:13:52.772 --rc geninfo_all_blocks=1 00:13:52.772 --rc geninfo_unexecuted_blocks=1 00:13:52.772 00:13:52.772 ' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6328ddb1-e348-4f2a-91c4-a7f9c985e952 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=360a9d82-07c5-4c57-94ea-c08a21a18462 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=01b077e9-bae1-401d-a11f-5d5ff6904c63 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.772 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:00.919 07:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:00.919 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:00.919 07:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:00.919 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:00.919 Found net devices under 0000:31:00.0: cvl_0_0 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
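[annotation] The trace above shows nvmf/common.sh walking its table of supported NIC PCI IDs (e810, x722, mlx) and resolving each matched device to the kernel netdev exposed under /sys/bus/pci/devices/$pci/net/. A minimal standalone sketch of that resolution step is below; it is a stand-in that reads vendor/device straight from sysfs rather than using the script's pci_bus_cache, and only the 0x8086:0x159b E810 ID is taken from the trace — everything else is illustrative:

  #!/usr/bin/env bash
  # Enumerate E810 ports (vendor 0x8086, device 0x159b, per the trace) and
  # print the netdev each one is bound to, mirroring the
  # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob seen above.
  for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
    [[ $(cat "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
      [[ -e $net ]] || continue   # device not bound to a net driver
      echo "Found net device under ${pci##*/}: ${net##*/}"
    done
  done

On this machine the loop would report cvl_0_0 under 0000:31:00.0 and cvl_0_1 under 0000:31:00.1, matching the "Found net devices under ..." lines in the trace.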
00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:00.919 Found net devices under 0000:31:00.1: cvl_0_1 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.919 07:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:00.919 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:00.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:14:00.919 00:14:00.919 --- 10.0.0.2 ping statistics --- 00:14:00.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.920 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:14:00.920 00:14:00.920 --- 10.0.0.1 ping statistics --- 00:14:00.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.920 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3338656 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3338656 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3338656 ']' 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:00.920 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.920 [2024-11-20 07:28:18.527380] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:14:00.920 [2024-11-20 07:28:18.527448] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.920 [2024-11-20 07:28:18.627443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.920 [2024-11-20 07:28:18.678087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.920 [2024-11-20 07:28:18.678142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.920 [2024-11-20 07:28:18.678151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.920 [2024-11-20 07:28:18.678158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.920 [2024-11-20 07:28:18.678164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
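[annotation] The nvmf_tgt instance whose startup banner appears here runs inside the cvl_0_0_ns_spdk network namespace created a few entries earlier. The topology is a single-host loopback over real hardware: one port of the E810 pair is moved into a namespace for the target while its peer stays in the root namespace for the initiator, so the two ends traverse the wire yet live on one box. The steps, reassembled from the ip/iptables commands visible in the trace (interface names and the 10.0.0.0/24 addresses come from the trace; run as root):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # hide the target port in it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps the peer port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP traffic, tagged so teardown can strip it selectively
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The SPDK_NVMF comment tag is what the earlier fused_ordering teardown relied on: its iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline removes exactly these rules and nothing else.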
00:14:00.920 [2024-11-20 07:28:18.678988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.180 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:01.180 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:01.180 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:01.180 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:01.180 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:01.441 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.441 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:01.441 [2024-11-20 07:28:19.555211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.441 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:01.441 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:01.441 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:01.702 Malloc1 00:14:01.702 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:01.964 Malloc2 00:14:01.964 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:02.226 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:02.226 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.487 [2024-11-20 07:28:20.589648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.487 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:02.487 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 01b077e9-bae1-401d-a11f-5d5ff6904c63 -a 10.0.0.2 -s 4420 -i 4 00:14:02.748 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:02.748 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:02.748 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.748 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:02.748 
07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:04.660 [ 0]:0x1 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.660 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:04.921 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a868eb4dfe77472f9082f6bc3733d24c 00:14:04.921 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a868eb4dfe77472f9082f6bc3733d24c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.921 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:04.921 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:04.921 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.921 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:04.921 [ 0]:0x1 00:14:04.921 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:04.921 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.921 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a868eb4dfe77472f9082f6bc3733d24c 00:14:04.921 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a868eb4dfe77472f9082f6bc3733d24c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.921 07:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:04.921 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.921 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:05.181 [ 1]:0x2 00:14:05.181 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:05.181 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.181 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6dac4ac4cd2b40c89a11a66cbed61633 00:14:05.181 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6dac4ac4cd2b40c89a11a66cbed61633 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.181 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:05.181 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.181 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.442 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:05.442 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:05.442 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 01b077e9-bae1-401d-a11f-5d5ff6904c63 -a 10.0.0.2 -s 4420 -i 4 00:14:05.701 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:05.702 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:05.702 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:05.702 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:14:05.702 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:14:05.702 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:07.612 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:07.612 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:07.612 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.612 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:07.612 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.612 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:14:07.612 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:07.612 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:07.872 [ 0]:0x2 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:07.872 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.872 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=6dac4ac4cd2b40c89a11a66cbed61633 00:14:07.872 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6dac4ac4cd2b40c89a11a66cbed61633 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.872 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.132 [ 0]:0x1 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a868eb4dfe77472f9082f6bc3733d24c 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a868eb4dfe77472f9082f6bc3733d24c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:08.132 [ 1]:0x2 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6dac4ac4cd2b40c89a11a66cbed61633 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6dac4ac4cd2b40c89a11a66cbed61633 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.132 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:08.392 07:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:08.392 [ 0]:0x2 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:08.392 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:08.652 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6dac4ac4cd2b40c89a11a66cbed61633 00:14:08.652 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6dac4ac4cd2b40c89a11a66cbed61633 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:08.652 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:08.652 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.652 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:08.652 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:08.652 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 01b077e9-bae1-401d-a11f-5d5ff6904c63 -a 10.0.0.2 -s 4420 -i 4 00:14:08.912 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:08.912 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:08.912 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.912 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:08.912 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:08.912 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:11.455 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:11.455 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:11.455 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.455 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.456 [ 0]:0x1 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a868eb4dfe77472f9082f6bc3733d24c 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a868eb4dfe77472f9082f6bc3733d24c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:11.456 [ 1]:0x2 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6dac4ac4cd2b40c89a11a66cbed61633 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6dac4ac4cd2b40c89a11a66cbed61633 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.456 [ 0]:0x2 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6dac4ac4cd2b40c89a11a66cbed61633 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6dac4ac4cd2b40c89a11a66cbed61633 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.456 07:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:11.456 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:11.717 [2024-11-20 07:28:29.742880] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:11.717 request: 00:14:11.717 { 00:14:11.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.717 "nsid": 2, 00:14:11.717 "host": "nqn.2016-06.io.spdk:host1", 00:14:11.717 "method": "nvmf_ns_remove_host", 00:14:11.717 "req_id": 1 00:14:11.717 } 00:14:11.717 Got JSON-RPC error response 00:14:11.717 response: 00:14:11.717 { 00:14:11.717 "code": -32602, 00:14:11.717 "message": "Invalid parameters" 00:14:11.717 } 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:11.717 07:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:11.717 [ 0]:0x2 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.717 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6dac4ac4cd2b40c89a11a66cbed61633 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6dac4ac4cd2b40c89a11a66cbed61633 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3340862 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3340862 /var/tmp/host.sock 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3340862 ']' 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:11.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:11.978 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.978 [2024-11-20 07:28:30.053879] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:14:11.978 [2024-11-20 07:28:30.053934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340862 ] 00:14:11.978 [2024-11-20 07:28:30.146865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.978 [2024-11-20 07:28:30.183064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.920 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:12.920 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:12.920 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.920 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:13.181 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6328ddb1-e348-4f2a-91c4-a7f9c985e952 00:14:13.181 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:13.181 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6328DDB1E3484F2A91C4A7F9C985E952 -i 00:14:13.442 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 360a9d82-07c5-4c57-94ea-c08a21a18462 00:14:13.442 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:13.442 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 360A9D8207C54C5794EAC08A21A18462 -i 00:14:13.442 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:13.703 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:13.963 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:13.963 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:14.223 nvme0n1 00:14:14.223 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:14.223 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:14.483 nvme1n2 00:14:14.483 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:14.483 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:14.483 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:14.483 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:14.483 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:14.743 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:14.743 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:14.743 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:14.743 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:15.003 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6328ddb1-e348-4f2a-91c4-a7f9c985e952 == \6\3\2\8\d\d\b\1\-\e\3\4\8\-\4\f\2\a\-\9\1\c\4\-\a\7\f\9\c\9\8\5\e\9\5\2 ]] 00:14:15.003 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:15.003 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:15.003 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:15.003 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
360a9d82-07c5-4c57-94ea-c08a21a18462 == \3\6\0\a\9\d\8\2\-\0\7\c\5\-\4\c\5\7\-\9\4\e\a\-\c\0\8\a\2\1\a\1\8\4\6\2 ]] 00:14:15.003 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.264 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 6328ddb1-e348-4f2a-91c4-a7f9c985e952 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6328DDB1E3484F2A91C4A7F9C985E952 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6328DDB1E3484F2A91C4A7F9C985E952 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6328DDB1E3484F2A91C4A7F9C985E952 00:14:15.525 [2024-11-20 07:28:33.685174] bdev.c:8477:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:15.525 [2024-11-20 07:28:33.685203] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:15.525 [2024-11-20 07:28:33.685210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.525 request: 00:14:15.525 { 00:14:15.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.525 "namespace": { 00:14:15.525 "bdev_name": 
"invalid", 00:14:15.525 "nsid": 1, 00:14:15.525 "nguid": "6328DDB1E3484F2A91C4A7F9C985E952", 00:14:15.525 "no_auto_visible": false 00:14:15.525 }, 00:14:15.525 "method": "nvmf_subsystem_add_ns", 00:14:15.525 "req_id": 1 00:14:15.525 } 00:14:15.525 Got JSON-RPC error response 00:14:15.525 response: 00:14:15.525 { 00:14:15.525 "code": -32602, 00:14:15.525 "message": "Invalid parameters" 00:14:15.525 } 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 6328ddb1-e348-4f2a-91c4-a7f9c985e952 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:15.525 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6328DDB1E3484F2A91C4A7F9C985E952 -i 00:14:15.785 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:17.697 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:17.697 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:17.697 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:17.958 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:17.958 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3340862 00:14:17.958 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3340862 ']' 00:14:17.958 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3340862 00:14:17.958 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:17.958 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:17.958 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3340862 00:14:17.958 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:17.958 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:17.958 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3340862' 00:14:17.958 killing process with pid 3340862 00:14:17.958 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3340862 00:14:17.958 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3340862 00:14:18.219 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:18.479 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:18.479 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:18.479 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:18.479 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:18.479 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:18.479 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:18.479 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:18.479 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:18.479 rmmod nvme_tcp 00:14:18.479 rmmod nvme_fabrics 00:14:18.479 rmmod nvme_keyring 00:14:18.479 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3338656 ']' 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3338656 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3338656 ']' 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3338656 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3338656 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3338656' 00:14:18.480 killing process with pid 3338656 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3338656 00:14:18.480 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3338656 00:14:18.741 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:18.741 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:18.741 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:18.741 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:18.741 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:18.741 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
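The iptr helper traced here tears down only the firewall rules the test added: each rule was inserted with an identifying comment, so cleanup can re-load the saved ruleset minus the tagged lines (the iptables-restore stage of that pipeline continues directly below). A minimal sketch of the tag-and-filter idiom, with the rule text taken from this same log:

# insert a rule tagged with a comment so teardown can find it later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# teardown: re-load the current ruleset with every tagged rule filtered out
iptables-save | grep -v SPDK_NVMF | iptables-restore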
00:14:18.741 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:18.741 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:18.741 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:18.741 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.741 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.741 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.655 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:20.655 00:14:20.655 real 0m28.315s 00:14:20.655 user 0m32.060s 00:14:20.655 sys 0m8.402s 00:14:20.655 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:20.655 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:20.655 ************************************ 00:14:20.655 END TEST nvmf_ns_masking 00:14:20.655 ************************************ 00:14:20.655 07:28:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:20.655 07:28:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:20.655 07:28:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:20.655 07:28:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:20.655 07:28:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:20.917 ************************************ 00:14:20.917 START TEST nvmf_nvme_cli 00:14:20.917 ************************************ 00:14:20.917 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:20.917 * Looking for test storage... 
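The START TEST/END TEST banners that bracket each sub-test come from the harness's run_test wrapper in autotest_common.sh. A rough, illustrative sketch of that banner-and-exit-code idiom (simplified; the in-tree helper also records timing and coverage state):

run_test_sketch() {    # simplified stand-in for the in-tree run_test
  local name=$1; shift
  printf '%s\n' '************************************' "START TEST $name" '************************************'
  "$@"                  # run the test script with its arguments
  local rc=$?           # capture its exit status before printing the footer
  printf '%s\n' '************************************' "END TEST $name" '************************************'
  return "$rc"
}
run_test_sketch nvmf_nvme_cli ./test/nvmf/target/nvme_cli.sh --transport=tcp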
00:14:20.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.917 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:20.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.918 --rc genhtml_branch_coverage=1 00:14:20.918 --rc genhtml_function_coverage=1 00:14:20.918 --rc genhtml_legend=1 00:14:20.918 --rc geninfo_all_blocks=1 00:14:20.918 --rc geninfo_unexecuted_blocks=1 00:14:20.918 00:14:20.918 ' 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:20.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.918 --rc genhtml_branch_coverage=1 00:14:20.918 --rc genhtml_function_coverage=1 00:14:20.918 --rc genhtml_legend=1 00:14:20.918 --rc geninfo_all_blocks=1 00:14:20.918 --rc geninfo_unexecuted_blocks=1 00:14:20.918 00:14:20.918 ' 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:20.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.918 --rc genhtml_branch_coverage=1 00:14:20.918 --rc genhtml_function_coverage=1 00:14:20.918 --rc genhtml_legend=1 00:14:20.918 --rc geninfo_all_blocks=1 00:14:20.918 --rc geninfo_unexecuted_blocks=1 00:14:20.918 00:14:20.918 ' 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:20.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.918 --rc genhtml_branch_coverage=1 00:14:20.918 --rc genhtml_function_coverage=1 00:14:20.918 --rc genhtml_legend=1 00:14:20.918 --rc geninfo_all_blocks=1 00:14:20.918 --rc geninfo_unexecuted_blocks=1 00:14:20.918 00:14:20.918 ' 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
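The cmp_versions calls traced above decide whether the installed lcov predates version 2 by splitting each version string on '.', '-' and ':' and comparing field by field, with missing fields treated as zero. A self-contained sketch of that comparison (simplified from scripts/common.sh, which supports more operators than "<"):

version_lt() {    # succeeds when dotted version $1 sorts before $2
  local IFS=.-: i n
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1                                      # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"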
00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.918 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.180 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:21.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:21.181 07:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:21.181 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:29.320 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:29.320 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.320 
07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:29.320 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:29.321 Found net devices under 0000:31:00.0: cvl_0_0 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:29.321 Found net devices under 0000:31:00.1: cvl_0_1 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:29.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:14:29.321 00:14:29.321 --- 10.0.0.2 ping statistics --- 00:14:29.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.321 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:29.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:14:29.321 00:14:29.321 --- 10.0.0.1 ping statistics --- 00:14:29.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.321 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3346586 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3346586 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 3346586 ']' 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:29.321 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.321 [2024-11-20 07:28:46.789487] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
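waitforlisten, traced at the start of this block, gates the test on the freshly launched nvmf_tgt actually serving RPCs (its startup banner continues just below). A rough sketch of that readiness poll, reusing the pid and socket from the trace; the loop bound and the rpc_get_methods probe mirror the helper's approach, but this code is illustrative rather than the in-tree implementation:

# poll the target's UNIX-domain RPC socket until it answers
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
pid=3346586                 # nvmfpid from the trace above
for (( i = 0; i < 100; i++ )); do
  kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
  sleep 0.1
done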
00:14:29.321 [2024-11-20 07:28:46.789549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.321 [2024-11-20 07:28:46.889967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.321 [2024-11-20 07:28:46.944605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.321 [2024-11-20 07:28:46.944659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.321 [2024-11-20 07:28:46.944668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.321 [2024-11-20 07:28:46.944675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.321 [2024-11-20 07:28:46.944681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.321 [2024-11-20 07:28:46.947103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.321 [2024-11-20 07:28:46.947263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.321 [2024-11-20 07:28:46.947426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.321 [2024-11-20 07:28:46.947426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.622 [2024-11-20 07:28:47.672716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.622 Malloc0 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
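Stripped of the rpc_cmd xtrace noise, the target bring-up traced in this stretch (the Malloc1 namespace, the subsystem, and the listeners follow just below) reduces to a short sequence of rpc.py calls, all with the flags exactly as traced:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192      # TCP transport (nvme_cli.sh@19)
"$rpc" bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM disk, 512-byte blocks
"$rpc" bdev_malloc_create 64 512 -b Malloc1
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291   # -a: allow any host
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The serial number SPDKISFASTANDAWESOME set here is the string the waitforserial check later greps for in lsblk output to confirm both namespaces showed up on the host.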
00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.622 Malloc1 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.622 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.623 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.623 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.623 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.623 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.623 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.623 [2024-11-20 07:28:47.788145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.623 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.623 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:29.623 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.623 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.623 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.623 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:14:29.943 00:14:29.943 Discovery Log Number of Records 2, Generation counter 2 00:14:29.943 =====Discovery Log Entry 0====== 00:14:29.943 trtype: tcp 00:14:29.943 adrfam: ipv4 00:14:29.943 subtype: current discovery subsystem 00:14:29.943 treq: not required 00:14:29.943 portid: 0 00:14:29.943 trsvcid: 4420 00:14:29.943 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:29.943 traddr: 10.0.0.2 00:14:29.943 eflags: explicit discovery connections, duplicate discovery information 00:14:29.943 sectype: none 00:14:29.943 =====Discovery Log Entry 1====== 00:14:29.943 trtype: tcp 00:14:29.943 adrfam: ipv4 00:14:29.943 subtype: nvme subsystem 00:14:29.943 treq: not required 00:14:29.943 portid: 0 00:14:29.943 trsvcid: 4420 00:14:29.943 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:29.943 traddr: 10.0.0.2 00:14:29.943 eflags: none 00:14:29.943 sectype: none 00:14:29.943 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:29.943 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:29.943 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:29.943 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.943 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:29.943 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:29.943 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.943 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:29.943 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:29.943 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:29.943 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:31.333 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:31.333 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:14:31.333 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:31.333 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:31.333 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:31.333 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:33.874 07:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:33.874 /dev/nvme0n2 ]] 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:33.874 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.134 07:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:34.134 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:34.135 rmmod nvme_tcp 00:14:34.135 rmmod nvme_fabrics 00:14:34.135 rmmod nvme_keyring 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3346586 ']' 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3346586 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 3346586 ']' 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 3346586 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3346586 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3346586' 00:14:34.135 killing process with pid 3346586 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 3346586 00:14:34.135 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 3346586 00:14:34.396 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.396 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.396 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.396 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:34.396 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:34.396 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:34.396 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:34.396 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.396 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:34.396 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.396 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.396 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.307 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:36.307 00:14:36.307 real 0m15.607s 00:14:36.307 user 0m24.168s 00:14:36.307 sys 0m6.397s 00:14:36.307 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:36.307 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.307 ************************************ 00:14:36.307 END TEST nvmf_nvme_cli 00:14:36.307 ************************************ 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:36.567 ************************************ 00:14:36.567 START TEST nvmf_vfio_user 00:14:36.567 ************************************ 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:36.567 * Looking for test storage... 00:14:36.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:36.567 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:36.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.828 --rc genhtml_branch_coverage=1 00:14:36.828 --rc genhtml_function_coverage=1 00:14:36.828 --rc genhtml_legend=1 00:14:36.828 --rc geninfo_all_blocks=1 00:14:36.828 --rc geninfo_unexecuted_blocks=1 00:14:36.828 00:14:36.828 ' 00:14:36.828 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:36.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.828 --rc genhtml_branch_coverage=1 00:14:36.828 --rc genhtml_function_coverage=1 00:14:36.829 --rc genhtml_legend=1 00:14:36.829 --rc geninfo_all_blocks=1 00:14:36.829 --rc geninfo_unexecuted_blocks=1 00:14:36.829 00:14:36.829 ' 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:36.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.829 --rc genhtml_branch_coverage=1 00:14:36.829 --rc genhtml_function_coverage=1 00:14:36.829 --rc genhtml_legend=1 00:14:36.829 --rc geninfo_all_blocks=1 00:14:36.829 --rc geninfo_unexecuted_blocks=1 00:14:36.829 00:14:36.829 ' 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:36.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.829 --rc genhtml_branch_coverage=1 00:14:36.829 --rc genhtml_function_coverage=1 00:14:36.829 --rc genhtml_legend=1 00:14:36.829 --rc geninfo_all_blocks=1 00:14:36.829 --rc geninfo_unexecuted_blocks=1 00:14:36.829 00:14:36.829 ' 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:36.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
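The "[: : integer expression expected" complaint from nvmf/common.sh line 33 above is the usual bash pitfall of feeding an empty string to a numeric test; it is harmless here because the failed test simply takes the else branch, but the failure mode is easy to reproduce:

    flag=""
    [ "$flag" -eq 1 ]        # bash: [: : integer expression expected
    # A defensive form (illustrative, not the harness's own code):
    [ "${flag:-0}" -eq 1 ]   # empty defaults to 0, test stays valid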
00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3348109 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3348109' 00:14:36.829 Process pid: 3348109 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3348109 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3348109 ']' 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:36.829 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:36.829 [2024-11-20 07:28:54.876867] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:14:36.829 [2024-11-20 07:28:54.876921] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.829 [2024-11-20 07:28:54.963443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.829 [2024-11-20 07:28:54.997409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.829 [2024-11-20 07:28:54.997440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:36.829 [2024-11-20 07:28:54.997446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.829 [2024-11-20 07:28:54.997451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.829 [2024-11-20 07:28:54.997455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.829 [2024-11-20 07:28:54.998781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.829 [2024-11-20 07:28:54.998869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.829 [2024-11-20 07:28:54.999021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.829 [2024-11-20 07:28:54.999022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:37.770 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:37.770 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:37.770 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:38.712 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:38.712 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:38.712 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:38.712 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:38.712 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:38.712 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:38.973 Malloc1 00:14:38.973 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:39.234 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:39.495 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:39.495 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:39.495 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:39.495 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:39.754 Malloc2 00:14:39.754 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
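The vfio-user variant of the same provisioning differs only in the transport type and in the listener address, which is a socket directory rather than an IP:port; the per-device loop being traced here reduces to the following (rpc.py path again an assumption):

    rpc="$SPDK_DIR/scripts/rpc.py"
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
        $rpc bdev_malloc_create 64 512 -b "Malloc$i"
        $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
            -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done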
00:14:40.016 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:40.016 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:40.277 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:40.277 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:40.277 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:40.277 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:40.277 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:40.277 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:40.277 [2024-11-20 07:28:58.403605] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:14:40.277 [2024-11-20 07:28:58.403645] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348880 ] 00:14:40.277 [2024-11-20 07:28:58.442065] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:40.277 [2024-11-20 07:28:58.450991] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:40.277 [2024-11-20 07:28:58.451008] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f28cd8c9000 00:14:40.277 [2024-11-20 07:28:58.451990] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.277 [2024-11-20 07:28:58.452989] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.277 [2024-11-20 07:28:58.453992] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.277 [2024-11-20 07:28:58.455001] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:40.277 [2024-11-20 07:28:58.456007] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:40.277 [2024-11-20 07:28:58.457014] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.277 [2024-11-20 07:28:58.458014] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:40.277 [2024-11-20 07:28:58.459022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.277 [2024-11-20 07:28:58.460022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:40.277 [2024-11-20 07:28:58.460029] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f28cd8be000 00:14:40.277 [2024-11-20 07:28:58.460942] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:40.277 [2024-11-20 07:28:58.470398] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:40.277 [2024-11-20 07:28:58.470425] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:40.277 [2024-11-20 07:28:58.475110] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:40.277 [2024-11-20 07:28:58.475144] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:40.277 [2024-11-20 07:28:58.475203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:40.277 [2024-11-20 07:28:58.475217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:40.277 [2024-11-20 07:28:58.475221] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:40.277 [2024-11-20 07:28:58.476115] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:40.277 [2024-11-20 07:28:58.476123] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:40.277 [2024-11-20 07:28:58.476128] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:40.277 [2024-11-20 07:28:58.477118] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:40.277 [2024-11-20 07:28:58.477125] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:40.277 [2024-11-20 07:28:58.477131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:40.277 [2024-11-20 07:28:58.478126] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:40.277 [2024-11-20 07:28:58.478132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:40.277 [2024-11-20 07:28:58.479130] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
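The register reads in this bring-up follow the standard NVMe layout, so the raw values decode directly; for example the VS read at offset 0x8 above (value 0x10300) is major 1, minor 3, matching the "NVMe Specification Version (VS): 1.3" line in the identify output further down:

    # VS register: bits 31:16 major, 15:8 minor, 7:0 tertiary.
    vs=0x10300
    printf 'NVMe %d.%d.%d\n' $(( (vs >> 16) & 0xffff )) \
                             $(( (vs >> 8) & 0xff )) \
                             $(( vs & 0xff ))
    # -> NVMe 1.3.0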
00:14:40.277 [2024-11-20 07:28:58.479137] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:40.277 [2024-11-20 07:28:58.479141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:40.277 [2024-11-20 07:28:58.479145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:40.277 [2024-11-20 07:28:58.479252] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:40.277 [2024-11-20 07:28:58.479257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:40.277 [2024-11-20 07:28:58.479261] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:40.277 [2024-11-20 07:28:58.480142] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:40.277 [2024-11-20 07:28:58.481144] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:40.277 [2024-11-20 07:28:58.482150] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:40.541 [2024-11-20 07:28:58.483146] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:40.541 [2024-11-20 07:28:58.483211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:40.541 [2024-11-20 07:28:58.484155] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:40.541 [2024-11-20 07:28:58.484161] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:40.541 [2024-11-20 07:28:58.484164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:40.541 [2024-11-20 07:28:58.484180] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:40.541 [2024-11-20 07:28:58.484191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:40.541 [2024-11-20 07:28:58.484204] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:40.541 [2024-11-20 07:28:58.484207] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:40.541 [2024-11-20 07:28:58.484210] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.541 [2024-11-20 07:28:58.484221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:40.541 [2024-11-20 07:28:58.484256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:40.541 [2024-11-20 07:28:58.484264] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:40.541 [2024-11-20 07:28:58.484268] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:40.541 [2024-11-20 07:28:58.484271] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:40.541 [2024-11-20 07:28:58.484275] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:40.541 [2024-11-20 07:28:58.484281] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:40.541 [2024-11-20 07:28:58.484285] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:40.541 [2024-11-20 07:28:58.484289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:40.541 [2024-11-20 07:28:58.484296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:40.542 [2024-11-20 07:28:58.484318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:40.542 [2024-11-20 07:28:58.484327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.542 [2024-11-20 07:28:58.484333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.542 [2024-11-20 07:28:58.484339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.542 [2024-11-20 07:28:58.484345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.542 [2024-11-20 07:28:58.484348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:40.542 [2024-11-20 07:28:58.484367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:40.542 [2024-11-20 07:28:58.484373] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:40.542 
[2024-11-20 07:28:58.484377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:40.542 [2024-11-20 07:28:58.484401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:40.542 [2024-11-20 07:28:58.484444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484455] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:40.542 [2024-11-20 07:28:58.484459] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:40.542 [2024-11-20 07:28:58.484461] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.542 [2024-11-20 07:28:58.484465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:40.542 [2024-11-20 07:28:58.484475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:40.542 [2024-11-20 07:28:58.484482] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:40.542 [2024-11-20 07:28:58.484489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484501] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:40.542 [2024-11-20 07:28:58.484504] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:40.542 [2024-11-20 07:28:58.484506] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.542 [2024-11-20 07:28:58.484511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:40.542 [2024-11-20 07:28:58.484528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:40.542 [2024-11-20 07:28:58.484538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484549] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:40.542 [2024-11-20 07:28:58.484552] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:40.542 [2024-11-20 07:28:58.484554] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.542 [2024-11-20 07:28:58.484558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:40.542 [2024-11-20 07:28:58.484569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:40.542 [2024-11-20 07:28:58.484575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484601] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:40.542 [2024-11-20 07:28:58.484604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:40.542 [2024-11-20 07:28:58.484608] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:40.542 [2024-11-20 07:28:58.484622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:40.542 [2024-11-20 07:28:58.484629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:40.542 [2024-11-20 07:28:58.484638] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:40.542 [2024-11-20 07:28:58.484645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:40.542 [2024-11-20 07:28:58.484653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:40.542 [2024-11-20 07:28:58.484661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:40.542 [2024-11-20 07:28:58.484669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:40.542 [2024-11-20 07:28:58.484676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:40.542 [2024-11-20 07:28:58.484686] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:40.542 [2024-11-20 07:28:58.484689] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:40.542 [2024-11-20 07:28:58.484692] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:40.542 [2024-11-20 07:28:58.484694] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:40.542 [2024-11-20 07:28:58.484696] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:40.542 [2024-11-20 07:28:58.484701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:40.542 [2024-11-20 07:28:58.484706] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:40.542 [2024-11-20 07:28:58.484709] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:40.542 [2024-11-20 07:28:58.484712] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.542 [2024-11-20 07:28:58.484716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:40.542 [2024-11-20 07:28:58.484721] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:40.542 [2024-11-20 07:28:58.484724] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:40.542 [2024-11-20 07:28:58.484727] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.542 [2024-11-20 07:28:58.484731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:40.542 [2024-11-20 07:28:58.484736] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:40.542 [2024-11-20 07:28:58.484740] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:40.542 [2024-11-20 07:28:58.484742] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.542 [2024-11-20 07:28:58.484750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:40.542 [2024-11-20 07:28:58.484755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:40.542 [2024-11-20 07:28:58.484764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:40.542 [2024-11-20 07:28:58.484773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:40.542 [2024-11-20 07:28:58.484778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:40.542 ===================================================== 00:14:40.542 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:40.542 ===================================================== 00:14:40.542 Controller Capabilities/Features 00:14:40.542 ================================ 00:14:40.542 Vendor ID: 4e58 00:14:40.542 Subsystem Vendor ID: 4e58 00:14:40.542 Serial Number: SPDK1 00:14:40.542 Model Number: SPDK bdev Controller 00:14:40.542 Firmware Version: 25.01 00:14:40.543 Recommended Arb Burst: 6 00:14:40.543 IEEE OUI Identifier: 8d 6b 50 00:14:40.543 Multi-path I/O 00:14:40.543 May have multiple subsystem ports: Yes 00:14:40.543 May have multiple controllers: Yes 00:14:40.543 Associated with SR-IOV VF: No 00:14:40.543 Max Data Transfer Size: 131072 00:14:40.543 Max Number of Namespaces: 32 00:14:40.543 Max Number of I/O Queues: 127 00:14:40.543 NVMe Specification Version (VS): 1.3 00:14:40.543 NVMe Specification Version (Identify): 1.3 00:14:40.543 Maximum Queue Entries: 256 00:14:40.543 Contiguous Queues Required: Yes 00:14:40.543 Arbitration Mechanisms Supported 00:14:40.543 Weighted Round Robin: Not Supported 00:14:40.543 Vendor Specific: Not Supported 00:14:40.543 Reset Timeout: 15000 ms 00:14:40.543 Doorbell Stride: 4 bytes 00:14:40.543 NVM Subsystem Reset: Not Supported 00:14:40.543 Command Sets Supported 00:14:40.543 NVM Command Set: Supported 00:14:40.543 Boot Partition: Not Supported 00:14:40.543 Memory Page Size Minimum: 4096 bytes 00:14:40.543 Memory Page Size Maximum: 4096 bytes 00:14:40.543 Persistent Memory Region: Not Supported 00:14:40.543 Optional Asynchronous Events Supported 00:14:40.543 Namespace Attribute Notices: Supported 00:14:40.543 Firmware Activation Notices: Not Supported 00:14:40.543 ANA Change Notices: Not Supported 00:14:40.543 PLE Aggregate Log Change Notices: Not Supported 00:14:40.543 LBA Status Info Alert Notices: Not Supported 00:14:40.543 EGE Aggregate Log Change Notices: Not Supported 00:14:40.543 Normal NVM Subsystem Shutdown event: Not Supported 00:14:40.543 Zone Descriptor Change Notices: Not Supported 00:14:40.543 Discovery Log Change Notices: Not Supported 00:14:40.543 Controller Attributes 00:14:40.543 128-bit Host Identifier: Supported 00:14:40.543 Non-Operational Permissive Mode: Not Supported 00:14:40.543 NVM Sets: Not Supported 00:14:40.543 Read Recovery Levels: Not Supported 00:14:40.543 Endurance Groups: Not Supported 00:14:40.543 Predictable Latency Mode: Not Supported 00:14:40.543 Traffic Based Keep ALive: Not Supported 00:14:40.543 Namespace Granularity: Not Supported 00:14:40.543 SQ Associations: Not Supported 00:14:40.543 UUID List: Not Supported 00:14:40.543 Multi-Domain Subsystem: Not Supported 00:14:40.543 Fixed Capacity Management: Not Supported 00:14:40.543 Variable Capacity Management: Not Supported 00:14:40.543 Delete Endurance Group: Not Supported 00:14:40.543 Delete NVM Set: Not Supported 00:14:40.543 Extended LBA Formats Supported: Not Supported 00:14:40.543 Flexible Data Placement Supported: Not Supported 00:14:40.543 00:14:40.543 Controller Memory Buffer Support 00:14:40.543 ================================ 00:14:40.543 
Supported: No 00:14:40.543 00:14:40.543 Persistent Memory Region Support 00:14:40.543 ================================ 00:14:40.543 Supported: No 00:14:40.543 00:14:40.543 Admin Command Set Attributes 00:14:40.543 ============================ 00:14:40.543 Security Send/Receive: Not Supported 00:14:40.543 Format NVM: Not Supported 00:14:40.543 Firmware Activate/Download: Not Supported 00:14:40.543 Namespace Management: Not Supported 00:14:40.543 Device Self-Test: Not Supported 00:14:40.543 Directives: Not Supported 00:14:40.543 NVMe-MI: Not Supported 00:14:40.543 Virtualization Management: Not Supported 00:14:40.543 Doorbell Buffer Config: Not Supported 00:14:40.543 Get LBA Status Capability: Not Supported 00:14:40.543 Command & Feature Lockdown Capability: Not Supported 00:14:40.543 Abort Command Limit: 4 00:14:40.543 Async Event Request Limit: 4 00:14:40.543 Number of Firmware Slots: N/A 00:14:40.543 Firmware Slot 1 Read-Only: N/A 00:14:40.543 Firmware Activation Without Reset: N/A 00:14:40.543 Multiple Update Detection Support: N/A 00:14:40.543 Firmware Update Granularity: No Information Provided 00:14:40.543 Per-Namespace SMART Log: No 00:14:40.543 Asymmetric Namespace Access Log Page: Not Supported 00:14:40.543 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:40.543 Command Effects Log Page: Supported 00:14:40.543 Get Log Page Extended Data: Supported 00:14:40.543 Telemetry Log Pages: Not Supported 00:14:40.543 Persistent Event Log Pages: Not Supported 00:14:40.543 Supported Log Pages Log Page: May Support 00:14:40.543 Commands Supported & Effects Log Page: Not Supported 00:14:40.543 Feature Identifiers & Effects Log Page:May Support 00:14:40.543 NVMe-MI Commands & Effects Log Page: May Support 00:14:40.543 Data Area 4 for Telemetry Log: Not Supported 00:14:40.543 Error Log Page Entries Supported: 128 00:14:40.543 Keep Alive: Supported 00:14:40.543 Keep Alive Granularity: 10000 ms 00:14:40.543 00:14:40.543 NVM Command Set Attributes 00:14:40.543 ========================== 00:14:40.543 Submission Queue Entry Size 00:14:40.543 Max: 64 00:14:40.543 Min: 64 00:14:40.543 Completion Queue Entry Size 00:14:40.543 Max: 16 00:14:40.543 Min: 16 00:14:40.543 Number of Namespaces: 32 00:14:40.543 Compare Command: Supported 00:14:40.543 Write Uncorrectable Command: Not Supported 00:14:40.543 Dataset Management Command: Supported 00:14:40.543 Write Zeroes Command: Supported 00:14:40.543 Set Features Save Field: Not Supported 00:14:40.543 Reservations: Not Supported 00:14:40.543 Timestamp: Not Supported 00:14:40.543 Copy: Supported 00:14:40.543 Volatile Write Cache: Present 00:14:40.543 Atomic Write Unit (Normal): 1 00:14:40.543 Atomic Write Unit (PFail): 1 00:14:40.543 Atomic Compare & Write Unit: 1 00:14:40.543 Fused Compare & Write: Supported 00:14:40.543 Scatter-Gather List 00:14:40.543 SGL Command Set: Supported (Dword aligned) 00:14:40.543 SGL Keyed: Not Supported 00:14:40.543 SGL Bit Bucket Descriptor: Not Supported 00:14:40.543 SGL Metadata Pointer: Not Supported 00:14:40.543 Oversized SGL: Not Supported 00:14:40.543 SGL Metadata Address: Not Supported 00:14:40.543 SGL Offset: Not Supported 00:14:40.543 Transport SGL Data Block: Not Supported 00:14:40.543 Replay Protected Memory Block: Not Supported 00:14:40.543 00:14:40.543 Firmware Slot Information 00:14:40.543 ========================= 00:14:40.543 Active slot: 1 00:14:40.543 Slot 1 Firmware Revision: 25.01 00:14:40.543 00:14:40.543 00:14:40.543 Commands Supported and Effects 00:14:40.543 ============================== 00:14:40.543 Admin 
Commands 00:14:40.543 -------------- 00:14:40.543 Get Log Page (02h): Supported 00:14:40.543 Identify (06h): Supported 00:14:40.543 Abort (08h): Supported 00:14:40.543 Set Features (09h): Supported 00:14:40.543 Get Features (0Ah): Supported 00:14:40.543 Asynchronous Event Request (0Ch): Supported 00:14:40.543 Keep Alive (18h): Supported 00:14:40.543 I/O Commands 00:14:40.543 ------------ 00:14:40.543 Flush (00h): Supported LBA-Change 00:14:40.543 Write (01h): Supported LBA-Change 00:14:40.543 Read (02h): Supported 00:14:40.543 Compare (05h): Supported 00:14:40.543 Write Zeroes (08h): Supported LBA-Change 00:14:40.543 Dataset Management (09h): Supported LBA-Change 00:14:40.543 Copy (19h): Supported LBA-Change 00:14:40.543 00:14:40.543 Error Log 00:14:40.543 ========= 00:14:40.543 00:14:40.543 Arbitration 00:14:40.543 =========== 00:14:40.543 Arbitration Burst: 1 00:14:40.543 00:14:40.543 Power Management 00:14:40.543 ================ 00:14:40.543 Number of Power States: 1 00:14:40.543 Current Power State: Power State #0 00:14:40.543 Power State #0: 00:14:40.543 Max Power: 0.00 W 00:14:40.543 Non-Operational State: Operational 00:14:40.543 Entry Latency: Not Reported 00:14:40.543 Exit Latency: Not Reported 00:14:40.543 Relative Read Throughput: 0 00:14:40.543 Relative Read Latency: 0 00:14:40.543 Relative Write Throughput: 0 00:14:40.543 Relative Write Latency: 0 00:14:40.543 Idle Power: Not Reported 00:14:40.543 Active Power: Not Reported 00:14:40.543 Non-Operational Permissive Mode: Not Supported 00:14:40.543 00:14:40.543 Health Information 00:14:40.543 ================== 00:14:40.543 Critical Warnings: 00:14:40.543 Available Spare Space: OK 00:14:40.543 Temperature: OK 00:14:40.544 Device Reliability: OK 00:14:40.544 Read Only: No 00:14:40.544 Volatile Memory Backup: OK 00:14:40.544 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:40.544 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:40.544 Available Spare: 0% 00:14:40.544 Available Spare Threshold: 0% 00:14:40.544 Life Percentage Used: 0% 00:14:40.544 Data Units Read: 0 00:14:40.544 Data Units Written: 0 00:14:40.544 Host Read Commands: 0 00:14:40.544 Host Write Commands: 0 00:14:40.544 Controller Busy Time: 0 minutes 00:14:40.544 Power Cycles: 0 00:14:40.544 Power On Hours: 0 hours 00:14:40.544 Unsafe Shutdowns: 0 00:14:40.544 Unrecoverable Media Errors: 0 00:14:40.544 Lifetime Error Log Entries: 0 00:14:40.544 Warning Temperature Time: 0 minutes 00:14:40.544 Critical Temperature Time: 0 minutes 00:14:40.544 00:14:40.544 Number of Queues 00:14:40.544 ================ 00:14:40.544 Number of I/O Submission Queues: 127 00:14:40.544 Number of I/O Completion Queues: 127 00:14:40.544 00:14:40.544 Active Namespaces 00:14:40.544 ================= 00:14:40.544 Namespace ID:1 00:14:40.544 Error Recovery Timeout: Unlimited 00:14:40.544 Command Set Identifier: NVM (00h) 00:14:40.544 Deallocate: Supported 00:14:40.544 Deallocated/Unwritten Error: Not Supported 00:14:40.544 Deallocated Read Value: Unknown 00:14:40.544 Deallocate in Write Zeroes: Not Supported 00:14:40.544 Deallocated Guard Field: 0xFFFF 00:14:40.544 Flush: Supported 00:14:40.544 Reservation: Supported 00:14:40.544 Namespace Sharing Capabilities: Multiple Controllers 00:14:40.544 Size (in LBAs): 131072 (0GiB) 00:14:40.544 Capacity (in LBAs): 131072 (0GiB) 00:14:40.544 Utilization (in LBAs): 131072 (0GiB) 00:14:40.544 NGUID: AAFD9AA0029342819DD704D2BFA4848B 00:14:40.544 UUID: aafd9aa0-0293-4281-9dd7-04d2bfa4848b 00:14:40.544 Thin Provisioning: Not Supported 00:14:40.544 Per-NS Atomic Units: Yes 00:14:40.544 Atomic Boundary Size (Normal): 0 00:14:40.544 Atomic Boundary Size (PFail): 0 00:14:40.544 Atomic Boundary Offset: 0 00:14:40.544 Maximum Single Source Range Length: 65535 00:14:40.544 Maximum Copy Length: 65535 00:14:40.544 Maximum Source Range Count: 1 00:14:40.544 NGUID/EUI64 Never Reused: No 00:14:40.544 Namespace Write Protected: No 00:14:40.544 Number of LBA Formats: 1 00:14:40.544 Current LBA Format: LBA Format #00 00:14:40.544 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:40.544 00:14:40.544
[2024-11-20 07:28:58.484857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:40.544 [2024-11-20 07:28:58.484868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:40.544 [2024-11-20 07:28:58.484891] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:40.544 [2024-11-20 07:28:58.484898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.544 [2024-11-20 07:28:58.484903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.544 [2024-11-20 07:28:58.484907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.544 [2024-11-20 07:28:58.484912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.544 [2024-11-20 07:28:58.488751] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:40.544 [2024-11-20 07:28:58.488760] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:40.544 [2024-11-20 07:28:58.489177] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:40.544 [2024-11-20 07:28:58.489217] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:40.544 [2024-11-20 07:28:58.489221] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:40.544 [2024-11-20 07:28:58.490180] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:40.544 [2024-11-20 07:28:58.490188] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:40.544 [2024-11-20 07:28:58.490239] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:40.544 [2024-11-20 07:28:58.492213] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:40.544
07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
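For reference, the -r argument above is an SPDK transport-ID string; spdk_nvme_perf parses it and attaches to the vfio-user endpoint the same way any SPDK host application would. Below is a minimal sketch of that connect path using the public API. It is illustrative only, not the tool's source: error handling is trimmed, the app name is arbitrary, and the exact link line depends on how SPDK was built.

/*
 * Sketch: connect to the vfio-user controller exercised above, using the
 * same transport-ID string passed to spdk_nvme_perf via -r.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	uint32_t nsid;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "vfio_user_connect"; /* arbitrary application name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same string the log passes with -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 "
	    "subnqn:nqn.2019-07.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous probe and attach; NULL opts means driver defaults. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	/* The identify dump above shows one active namespace of 131072 LBAs. */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		printf("nsid %u: %" PRIu64 " bytes\n", nsid,
		       spdk_nvme_ns_get_size(ns));
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}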
00:14:40.544 [2024-11-20 07:28:58.679976] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:45.834 Initializing NVMe Controllers 00:14:45.834 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:45.834 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:45.834 Initialization complete. Launching workers. 00:14:45.834 ======================================================== 00:14:45.834 Latency(us) 00:14:45.834 Device Information : IOPS MiB/s Average min max 00:14:45.834 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39983.04 156.18 3201.22 837.23 6805.03 00:14:45.834 ======================================================== 00:14:45.834 Total : 39983.04 156.18 3201.22 837.23 6805.03 00:14:45.834 00:14:45.834 [2024-11-20 07:29:03.699481] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:45.834 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:45.834 [2024-11-20 07:29:03.895303] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.124 Initializing NVMe Controllers 00:14:51.124 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:51.124 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:51.124 Initialization complete. Launching workers. 
00:14:51.124 ======================================================== 00:14:51.124 Latency(us) 00:14:51.124 Device Information : IOPS MiB/s Average min max 00:14:51.124 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16056.42 62.72 7977.42 5772.64 8975.21 00:14:51.124 ======================================================== 00:14:51.124 Total : 16056.42 62.72 7977.42 5772.64 8975.21 00:14:51.124 00:14:51.124 [2024-11-20 07:29:08.934271] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.124 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:51.124 [2024-11-20 07:29:09.147121] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.408 [2024-11-20 07:29:14.223976] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.408 Initializing NVMe Controllers 00:14:56.408 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:56.408 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:56.408 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:56.408 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:56.408 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:56.408 Initialization complete. Launching workers. 00:14:56.408 Starting thread on core 2 00:14:56.408 Starting thread on core 3 00:14:56.408 Starting thread on core 1 00:14:56.408 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:56.409 [2024-11-20 07:29:14.475313] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.710 [2024-11-20 07:29:17.540569] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.710 Initializing NVMe Controllers 00:14:59.710 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.710 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.710 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:59.710 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:59.710 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:59.710 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:59.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:59.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:59.711 Initialization complete. Launching workers. 
00:14:59.711 Starting thread on core 1 with urgent priority queue 00:14:59.711 Starting thread on core 2 with urgent priority queue 00:14:59.711 Starting thread on core 3 with urgent priority queue 00:14:59.711 Starting thread on core 0 with urgent priority queue 00:14:59.711 SPDK bdev Controller (SPDK1 ) core 0: 9491.00 IO/s 10.54 secs/100000 ios 00:14:59.711 SPDK bdev Controller (SPDK1 ) core 1: 12129.67 IO/s 8.24 secs/100000 ios 00:14:59.711 SPDK bdev Controller (SPDK1 ) core 2: 9056.00 IO/s 11.04 secs/100000 ios 00:14:59.711 SPDK bdev Controller (SPDK1 ) core 3: 12620.67 IO/s 7.92 secs/100000 ios 00:14:59.711 ======================================================== 00:14:59.711 00:14:59.711 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:59.711 [2024-11-20 07:29:17.783139] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.711 Initializing NVMe Controllers 00:14:59.711 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.711 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.711 Namespace ID: 1 size: 0GB 00:14:59.711 Initialization complete. 00:14:59.711 INFO: using host memory buffer for IO 00:14:59.711 Hello world! 00:14:59.711 [2024-11-20 07:29:17.817339] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.711 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:59.971 [2024-11-20 07:29:18.059119] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.913 Initializing NVMe Controllers 00:15:00.913 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.913 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.913 Initialization complete. Launching workers. 
00:15:00.913 submit (in ns) avg, min, max = 5859.2, 2825.0, 3998381.7 00:15:00.913 complete (in ns) avg, min, max = 17708.1, 1648.3, 4004589.2 00:15:00.913 00:15:00.913 Submit histogram 00:15:00.913 ================ 00:15:00.913 Range in us Cumulative Count 00:15:00.913 2.813 - 2.827: 0.0386% ( 8) 00:15:00.913 2.827 - 2.840: 0.8484% ( 168) 00:15:00.913 2.840 - 2.853: 3.2202% ( 492) 00:15:00.913 2.853 - 2.867: 7.0334% ( 791) 00:15:00.913 2.867 - 2.880: 12.3698% ( 1107) 00:15:00.913 2.880 - 2.893: 18.3137% ( 1233) 00:15:00.913 2.893 - 2.907: 23.2212% ( 1018) 00:15:00.913 2.907 - 2.920: 28.4516% ( 1085) 00:15:00.914 2.920 - 2.933: 33.7110% ( 1091) 00:15:00.914 2.933 - 2.947: 38.9125% ( 1079) 00:15:00.914 2.947 - 2.960: 44.9335% ( 1249) 00:15:00.914 2.960 - 2.973: 51.7403% ( 1412) 00:15:00.914 2.973 - 2.987: 59.0243% ( 1511) 00:15:00.914 2.987 - 3.000: 68.2173% ( 1907) 00:15:00.914 3.000 - 3.013: 75.7424% ( 1561) 00:15:00.914 3.013 - 3.027: 82.8047% ( 1465) 00:15:00.914 3.027 - 3.040: 88.6088% ( 1204) 00:15:00.914 3.040 - 3.053: 93.3475% ( 983) 00:15:00.914 3.053 - 3.067: 96.3411% ( 621) 00:15:00.914 3.067 - 3.080: 97.9609% ( 336) 00:15:00.914 3.080 - 3.093: 98.8575% ( 186) 00:15:00.914 3.093 - 3.107: 99.1998% ( 71) 00:15:00.914 3.107 - 3.120: 99.3974% ( 41) 00:15:00.914 3.120 - 3.133: 99.4890% ( 19) 00:15:00.914 3.133 - 3.147: 99.5469% ( 12) 00:15:00.914 3.147 - 3.160: 99.5710% ( 5) 00:15:00.914 3.680 - 3.707: 99.5806% ( 2) 00:15:00.914 3.813 - 3.840: 99.5902% ( 2) 00:15:00.914 3.867 - 3.893: 99.5951% ( 1) 00:15:00.914 4.427 - 4.453: 99.5999% ( 1) 00:15:00.914 4.507 - 4.533: 99.6047% ( 1) 00:15:00.914 4.587 - 4.613: 99.6095% ( 1) 00:15:00.914 4.613 - 4.640: 99.6143% ( 1) 00:15:00.914 4.640 - 4.667: 99.6288% ( 3) 00:15:00.914 4.667 - 4.693: 99.6384% ( 2) 00:15:00.914 4.747 - 4.773: 99.6481% ( 2) 00:15:00.914 4.773 - 4.800: 99.6529% ( 1) 00:15:00.914 4.800 - 4.827: 99.6577% ( 1) 00:15:00.914 4.827 - 4.853: 99.6674% ( 2) 00:15:00.914 4.853 - 4.880: 99.6722% ( 1) 00:15:00.914 4.880 - 4.907: 99.6770% ( 1) 00:15:00.914 4.907 - 4.933: 99.6818% ( 1) 00:15:00.914 4.960 - 4.987: 99.6963% ( 3) 00:15:00.914 4.987 - 5.013: 99.7059% ( 2) 00:15:00.914 5.013 - 5.040: 99.7300% ( 5) 00:15:00.914 5.040 - 5.067: 99.7445% ( 3) 00:15:00.914 5.067 - 5.093: 99.7590% ( 3) 00:15:00.914 5.093 - 5.120: 99.7686% ( 2) 00:15:00.914 5.120 - 5.147: 99.7927% ( 5) 00:15:00.914 5.147 - 5.173: 99.7975% ( 1) 00:15:00.914 5.173 - 5.200: 99.8024% ( 1) 00:15:00.914 5.200 - 5.227: 99.8120% ( 2) 00:15:00.914 5.227 - 5.253: 99.8168% ( 1) 00:15:00.914 5.307 - 5.333: 99.8265% ( 2) 00:15:00.914 5.387 - 5.413: 99.8313% ( 1) 00:15:00.914 5.493 - 5.520: 99.8409% ( 2) 00:15:00.914 5.627 - 5.653: 99.8457% ( 1) 00:15:00.914 5.653 - 5.680: 99.8506% ( 1) 00:15:00.914 5.760 - 5.787: 99.8554% ( 1) 00:15:00.914 5.920 - 5.947: 99.8602% ( 1) 00:15:00.914 6.080 - 6.107: 99.8650% ( 1) 00:15:00.914 6.160 - 6.187: 99.8698% ( 1) 00:15:00.914 6.187 - 6.213: 99.8747% ( 1) 00:15:00.914 6.240 - 6.267: 99.8795% ( 1) 00:15:00.914 6.373 - 6.400: 99.8843% ( 1) 00:15:00.914 6.400 - 6.427: 99.8891% ( 1) 00:15:00.914 6.453 - 6.480: 99.8939% ( 1) 00:15:00.914 6.507 - 6.533: 99.8988% ( 1) 00:15:00.914 6.533 - 6.560: 99.9036% ( 1) 00:15:00.914 6.613 - 6.640: 99.9084% ( 1) 00:15:00.914 7.147 - 7.200: 99.9132% ( 1) 00:15:00.914 7.520 - 7.573: 99.9180% ( 1) 00:15:00.914 9.600 - 9.653: 99.9229% ( 1) 00:15:00.914 12.053 - 12.107: 99.9277% ( 1) 00:15:00.914 3986.773 - 4014.080: 100.0000% ( 15) 00:15:00.914 00:15:00.914 Complete histogram 00:15:00.914 ================== 
00:15:00.914 Range in us Cumulative Count 00:15:00.914 1.647 - 1.653: 0.5833% ( 121) 00:15:00.914 1.653 - 1.660: 0.9111% ( 68) 00:15:00.914 1.660 - 1.667: 0.9834% ( 15) 00:15:00.914 1.667 - 1.673: 1.1088% ( 26) 00:15:00.914 1.673 - 1.680: 1.1666% ( 12) 00:15:00.914 1.680 - 1.687: 1.1714% ( 1) 00:15:00.914 1.687 - 1.693: 1.5233% ( 73) 00:15:00.914 1.693 - 1.700: 24.1130% ( 4686) 00:15:00.914 1.700 - 1.707: 46.5580% ( 4656) 00:15:00.914 1.707 - 1.720: 68.5644% ( 4565) 00:15:00.914 1.720 - 1.733: 79.3193% ( 2231) 00:15:00.914 1.733 - 1.747: 82.9541% ( 754) 00:15:00.914 1.747 - 1.760: 84.7378% ( 370) 00:15:00.914 1.760 - 1.773: 90.2526% ( 1144) 00:15:00.914 1.773 - 1.787: 95.2661% ( 1040) 00:15:00.914 1.787 - 1.800: 98.0524% ( 578) 00:15:00.914 1.800 - 1.813: 99.1226% ( 222) 00:15:00.914 1.813 - 1.827: 99.3926% ( 56) 00:15:00.914 1.827 - 1.840: 99.4504% ( 12) 00:15:00.914 1.867 - 1.880: 99.4553% ( 1) 00:15:00.914 2.000 - 2.013: 99.4601% ( 1) 00:15:00.914 2.080 - 2.093: 99.4649% ( 1) 00:15:00.914 3.080 - 3.093: 99.4697% ( 1) 00:15:00.914 3.253 - 3.267: 99.4745% ( 1) 00:15:00.914 3.600 - 3.627: 99.4794% ( 1) 00:15:00.914 3.760 - 3.787: 99.4842% ( 1) 00:15:00.914 3.813 - 3.840: 99.4938% ( 2) 00:15:00.914 3.840 - 3.867: 99.4987% ( 1) 00:15:00.914 4.000 - 4.027: 99.5035% ( 1) 00:15:00.914 4.053 - 4.080: 99.5179% ( 3) 00:15:00.914 4.133 - 4.160: 99.5228% ( 1) 00:15:00.914 4.187 - 4.213: 99.5276% ( 1) 00:15:00.914 4.213 - 4.240: 99.5324% ( 1) 00:15:00.914 4.400 - 4.427: 99.5372% ( 1) 00:15:00.914 4.480 - 4.507: 99.5420% ( 1) 00:15:00.914 4.587 - 4.613: 99.5469% ( 1) 00:15:00.914 4.613 - 4.640: 99.5517% ( 1) 00:15:00.914 4.693 - 4.720: 99.5565% ( 1) 00:15:00.914 4.800 - 4.827: 99.5613% ( 1) 00:15:00.914 5.067 - 5.093: 99.5661% ( 1) 00:15:00.914 5.520 - 5.547: 99.5710% ( 1) 00:15:00.914 5.573 - 5.600: 99.5758% ( 1) 00:15:00.914 5.627 - 5.653: 99.5806% ( 1) 00:15:00.914 8.587 - 8.640: 99.5854% ( 1) 00:15:00.914 9.387 - 9.440: 99.5902% ( 1) 00:15:00.914 28.800 - 29.013: 99.5951% ( 1) 00:15:00.914 77.653 - 78.080: 99.5999% ( 1) 00:15:00.914 3986.773 - 4014.080: 100.0000% ( 83) 00:15:00.914 00:15:00.914
[2024-11-20 07:29:19.077722] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.914
07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems [ 00:15:01.174 { 00:15:01.174 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:01.174 "subtype": "Discovery", 00:15:01.174 "listen_addresses": [], 00:15:01.174 "allow_any_host": true, 00:15:01.174 "hosts": [] 00:15:01.174 }, 00:15:01.174 { 00:15:01.174 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:01.174 "subtype": "NVMe", 00:15:01.174 "listen_addresses": [ 00:15:01.174 { 00:15:01.174 "trtype": "VFIOUSER", 00:15:01.174 "adrfam": "IPv4", 00:15:01.174 "traddr":
"/var/run/vfio-user/domain/vfio-user1/1", 00:15:01.174 "trsvcid": "0" 00:15:01.174 } 00:15:01.174 ], 00:15:01.174 "allow_any_host": true, 00:15:01.174 "hosts": [], 00:15:01.174 "serial_number": "SPDK1", 00:15:01.174 "model_number": "SPDK bdev Controller", 00:15:01.174 "max_namespaces": 32, 00:15:01.174 "min_cntlid": 1, 00:15:01.174 "max_cntlid": 65519, 00:15:01.174 "namespaces": [ 00:15:01.174 { 00:15:01.174 "nsid": 1, 00:15:01.174 "bdev_name": "Malloc1", 00:15:01.174 "name": "Malloc1", 00:15:01.174 "nguid": "AAFD9AA0029342819DD704D2BFA4848B", 00:15:01.174 "uuid": "aafd9aa0-0293-4281-9dd7-04d2bfa4848b" 00:15:01.174 } 00:15:01.174 ] 00:15:01.174 }, 00:15:01.174 { 00:15:01.174 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:01.174 "subtype": "NVMe", 00:15:01.174 "listen_addresses": [ 00:15:01.174 { 00:15:01.174 "trtype": "VFIOUSER", 00:15:01.174 "adrfam": "IPv4", 00:15:01.174 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:01.174 "trsvcid": "0" 00:15:01.174 } 00:15:01.174 ], 00:15:01.174 "allow_any_host": true, 00:15:01.174 "hosts": [], 00:15:01.174 "serial_number": "SPDK2", 00:15:01.174 "model_number": "SPDK bdev Controller", 00:15:01.174 "max_namespaces": 32, 00:15:01.174 "min_cntlid": 1, 00:15:01.174 "max_cntlid": 65519, 00:15:01.174 "namespaces": [ 00:15:01.174 { 00:15:01.174 "nsid": 1, 00:15:01.174 "bdev_name": "Malloc2", 00:15:01.174 "name": "Malloc2", 00:15:01.174 "nguid": "6D42464B9AF548AE8F2B6389D3B9D74B", 00:15:01.174 "uuid": "6d42464b-9af5-48ae-8f2b-6389d3b9d74b" 00:15:01.174 } 00:15:01.174 ] 00:15:01.174 } 00:15:01.174 ] 00:15:01.174 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:01.174 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:01.174 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3352975 00:15:01.174 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:01.174 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:01.174 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:01.174 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:01.174 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:01.174 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:01.174 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:01.434 [2024-11-20 07:29:19.466162] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.434 Malloc3 00:15:01.434 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:01.434 [2024-11-20 07:29:19.627403] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.694 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:01.694 Asynchronous Event Request test 00:15:01.694 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.694 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.694 Registering asynchronous event callbacks... 00:15:01.694 Starting namespace attribute notice tests for all controllers... 00:15:01.694 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:01.694 aer_cb - Changed Namespace 00:15:01.694 Cleaning up... 00:15:01.694 [ 00:15:01.694 { 00:15:01.694 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:01.694 "subtype": "Discovery", 00:15:01.694 "listen_addresses": [], 00:15:01.694 "allow_any_host": true, 00:15:01.694 "hosts": [] 00:15:01.694 }, 00:15:01.694 { 00:15:01.694 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:01.694 "subtype": "NVMe", 00:15:01.694 "listen_addresses": [ 00:15:01.694 { 00:15:01.694 "trtype": "VFIOUSER", 00:15:01.694 "adrfam": "IPv4", 00:15:01.694 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:01.694 "trsvcid": "0" 00:15:01.694 } 00:15:01.694 ], 00:15:01.694 "allow_any_host": true, 00:15:01.694 "hosts": [], 00:15:01.694 "serial_number": "SPDK1", 00:15:01.694 "model_number": "SPDK bdev Controller", 00:15:01.694 "max_namespaces": 32, 00:15:01.694 "min_cntlid": 1, 00:15:01.694 "max_cntlid": 65519, 00:15:01.694 "namespaces": [ 00:15:01.694 { 00:15:01.694 "nsid": 1, 00:15:01.694 "bdev_name": "Malloc1", 00:15:01.694 "name": "Malloc1", 00:15:01.694 "nguid": "AAFD9AA0029342819DD704D2BFA4848B", 00:15:01.694 "uuid": "aafd9aa0-0293-4281-9dd7-04d2bfa4848b" 00:15:01.694 }, 00:15:01.694 { 00:15:01.694 "nsid": 2, 00:15:01.694 "bdev_name": "Malloc3", 00:15:01.694 "name": "Malloc3", 00:15:01.694 "nguid": "60C043113EB64B548107B90514CD687A", 00:15:01.694 "uuid": "60c04311-3eb6-4b54-8107-b90514cd687a" 00:15:01.694 } 00:15:01.694 ] 00:15:01.694 }, 00:15:01.694 { 00:15:01.694 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:01.694 "subtype": "NVMe", 00:15:01.694 "listen_addresses": [ 00:15:01.694 { 00:15:01.694 "trtype": "VFIOUSER", 00:15:01.694 "adrfam": "IPv4", 00:15:01.694 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:01.694 "trsvcid": "0" 00:15:01.694 } 00:15:01.694 ], 00:15:01.694 "allow_any_host": true, 00:15:01.694 "hosts": [], 00:15:01.694 "serial_number": "SPDK2", 00:15:01.694 "model_number": "SPDK bdev 
Controller", 00:15:01.694 "max_namespaces": 32, 00:15:01.694 "min_cntlid": 1, 00:15:01.694 "max_cntlid": 65519, 00:15:01.694 "namespaces": [ 00:15:01.694 { 00:15:01.694 "nsid": 1, 00:15:01.694 "bdev_name": "Malloc2", 00:15:01.694 "name": "Malloc2", 00:15:01.694 "nguid": "6D42464B9AF548AE8F2B6389D3B9D74B", 00:15:01.694 "uuid": "6d42464b-9af5-48ae-8f2b-6389d3b9d74b" 00:15:01.694 } 00:15:01.694 ] 00:15:01.694 } 00:15:01.694 ] 00:15:01.694 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3352975 00:15:01.694 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:01.694 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:01.694 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:01.694 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:01.694 [2024-11-20 07:29:19.857966] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:15:01.694 [2024-11-20 07:29:19.858017] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353140 ] 00:15:01.694 [2024-11-20 07:29:19.897951] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:01.957 [2024-11-20 07:29:19.906933] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:01.957 [2024-11-20 07:29:19.906954] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7efd31e55000 00:15:01.957 [2024-11-20 07:29:19.907936] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.957 [2024-11-20 07:29:19.908938] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.957 [2024-11-20 07:29:19.909947] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.957 [2024-11-20 07:29:19.910956] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.957 [2024-11-20 07:29:19.911960] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.957 [2024-11-20 07:29:19.912969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.957 [2024-11-20 07:29:19.913980] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.957 [2024-11-20 07:29:19.914984] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:01.957 [2024-11-20 07:29:19.915989] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:01.957 [2024-11-20 07:29:19.915996] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7efd31e4a000 00:15:01.957 [2024-11-20 07:29:19.916908] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:01.957 [2024-11-20 07:29:19.926289] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:01.957 [2024-11-20 07:29:19.926310] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:01.957 [2024-11-20 07:29:19.931378] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:01.957 [2024-11-20 07:29:19.931415] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:01.957 [2024-11-20 07:29:19.931473] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:01.957 [2024-11-20 07:29:19.931485] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:01.957 [2024-11-20 07:29:19.931489] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:01.957 [2024-11-20 07:29:19.932384] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:01.957 [2024-11-20 07:29:19.932392] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:01.957 [2024-11-20 07:29:19.932397] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:01.957 [2024-11-20 07:29:19.933389] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:01.957 [2024-11-20 07:29:19.933396] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:01.957 [2024-11-20 07:29:19.933401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:01.957 [2024-11-20 07:29:19.934392] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:01.957 [2024-11-20 07:29:19.934399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:01.957 [2024-11-20 07:29:19.935402] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:01.957 [2024-11-20 07:29:19.935409] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
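The register traffic above is the standard NVMe enable handshake, simply carried over vfio-user: offset 0x0 is CAP, 0x8 is VS, 0x14 is CC and 0x1c is CSTS. Decoding the values that appear in this log confirms the narrative the driver prints: VS 0x10300 is NVMe 1.3 (matching the identify dump), CC and CSTS read 0x0 here because the controller is still disabled, and in the earlier teardown the driver read CC 0x460001, wrote 0x464001 and then saw CSTS 0x9. A small standalone decoder, with bit positions taken from the NVMe base specification (no SPDK needed):

/* Decode the controller registers polled in the log above. */
#include <stdint.h>
#include <stdio.h>

static void decode_cc(uint32_t cc)
{
	printf("CC=0x%06x: EN=%u SHN=%u IOSQES=%u IOCQES=%u\n", cc,
	       cc & 1u,            /* bit 0: enable */
	       (cc >> 14) & 0x3u,  /* bits 15:14: shutdown notification */
	       (cc >> 16) & 0xfu,  /* bits 19:16: log2 of SQ entry size */
	       (cc >> 20) & 0xfu); /* bits 23:20: log2 of CQ entry size */
}

static void decode_csts(uint32_t csts)
{
	printf("CSTS=0x%x: RDY=%u CFS=%u SHST=%u\n", csts,
	       csts & 1u, (csts >> 1) & 1u, (csts >> 2) & 0x3u);
}

int main(void)
{
	uint32_t vs = 0x10300; /* offset 0x8 read above */

	printf("VS=0x%x: NVMe %u.%u\n", vs,
	       (vs >> 16) & 0xffffu, (vs >> 8) & 0xffu);
	decode_cc(0x0);      /* init read: controller disabled */
	decode_cc(0x460001); /* teardown read: EN=1, IOSQES=6 (64 B), IOCQES=4 (16 B) */
	decode_cc(0x464001); /* teardown write: same, plus SHN=1 (normal shutdown) */
	decode_csts(0x9);    /* RDY=1, SHST=2: shutdown processing complete */
	return 0;
}

The IOSQES/IOCQES values (2^6 = 64-byte submission entries, 2^4 = 16-byte completion entries) line up with the "Maximum Queue Entries" sizes in the identify dump, and SHST=2 is exactly the "shutdown complete" the driver logged.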
00:15:01.957 [2024-11-20 07:29:19.935412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:01.957 [2024-11-20 07:29:19.935417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:01.957 [2024-11-20 07:29:19.935523] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:01.957 [2024-11-20 07:29:19.935527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:01.957 [2024-11-20 07:29:19.935530] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:01.957 [2024-11-20 07:29:19.936407] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:01.957 [2024-11-20 07:29:19.937410] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:01.957 [2024-11-20 07:29:19.938421] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:01.957 [2024-11-20 07:29:19.939422] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.957 [2024-11-20 07:29:19.939450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:01.957 [2024-11-20 07:29:19.940428] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:01.957 [2024-11-20 07:29:19.940434] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:01.957 [2024-11-20 07:29:19.940438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:01.957 [2024-11-20 07:29:19.940452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:01.957 [2024-11-20 07:29:19.940460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:01.957 [2024-11-20 07:29:19.940469] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.957 [2024-11-20 07:29:19.940473] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.957 [2024-11-20 07:29:19.940475] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.957 [2024-11-20 07:29:19.940485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.957 [2024-11-20 07:29:19.946753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:01.957 
[2024-11-20 07:29:19.946762] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:01.957 [2024-11-20 07:29:19.946766] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:01.957 [2024-11-20 07:29:19.946769] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:01.957 [2024-11-20 07:29:19.946773] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:01.957 [2024-11-20 07:29:19.946778] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:01.957 [2024-11-20 07:29:19.946781] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:01.957 [2024-11-20 07:29:19.946785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:01.957 [2024-11-20 07:29:19.946791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:01.957 [2024-11-20 07:29:19.946799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:01.957 [2024-11-20 07:29:19.954749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:01.957 [2024-11-20 07:29:19.954758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.957 [2024-11-20 07:29:19.954765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.957 [2024-11-20 07:29:19.954771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.957 [2024-11-20 07:29:19.954777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.957 [2024-11-20 07:29:19.954780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:01.957 [2024-11-20 07:29:19.954785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:01.957 [2024-11-20 07:29:19.954792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:01.957 [2024-11-20 07:29:19.962749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:01.957 [2024-11-20 07:29:19.962756] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:01.958 [2024-11-20 07:29:19.962760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:01.958 [2024-11-20 07:29:19.962767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.962771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.962777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:01.958 [2024-11-20 07:29:19.970749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:01.958 [2024-11-20 07:29:19.970796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.970801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.970807] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:01.958 [2024-11-20 07:29:19.970811] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:01.958 [2024-11-20 07:29:19.970813] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.958 [2024-11-20 07:29:19.970818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:01.958 [2024-11-20 07:29:19.978749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:01.958 [2024-11-20 07:29:19.978757] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:01.958 [2024-11-20 07:29:19.978767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.978772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.978777] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.958 [2024-11-20 07:29:19.978780] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.958 [2024-11-20 07:29:19.978783] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.958 [2024-11-20 07:29:19.978787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.958 [2024-11-20 07:29:19.986749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:01.958 [2024-11-20 07:29:19.986761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.986767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.986772] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.958 [2024-11-20 07:29:19.986775] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.958 [2024-11-20 07:29:19.986778] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.958 [2024-11-20 07:29:19.986782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.958 [2024-11-20 07:29:19.994750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:01.958 [2024-11-20 07:29:19.994759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.994764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.994770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.994775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.994778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.994782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.994785] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:01.958 [2024-11-20 07:29:19.994789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:01.958 [2024-11-20 07:29:19.994792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:01.958 [2024-11-20 07:29:19.994805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:01.958 [2024-11-20 07:29:20.001938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:01.958 [2024-11-20 07:29:20.001951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:01.958 [2024-11-20 07:29:20.009751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:01.958 [2024-11-20 07:29:20.009761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:01.958 [2024-11-20 07:29:20.017749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:15:01.958 [2024-11-20 07:29:20.017759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:01.958 [2024-11-20 07:29:20.025751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:01.958 [2024-11-20 07:29:20.025763] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:01.958 [2024-11-20 07:29:20.025767] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:01.958 [2024-11-20 07:29:20.025769] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:01.958 [2024-11-20 07:29:20.025772] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:01.958 [2024-11-20 07:29:20.025774] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:01.958 [2024-11-20 07:29:20.025780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:01.958 [2024-11-20 07:29:20.025785] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:01.958 [2024-11-20 07:29:20.025788] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:01.958 [2024-11-20 07:29:20.025791] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.958 [2024-11-20 07:29:20.025797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:01.958 [2024-11-20 07:29:20.025802] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:01.958 [2024-11-20 07:29:20.025805] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.958 [2024-11-20 07:29:20.025808] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.958 [2024-11-20 07:29:20.025812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.958 [2024-11-20 07:29:20.025818] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:01.958 [2024-11-20 07:29:20.025821] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:01.958 [2024-11-20 07:29:20.025823] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.958 [2024-11-20 07:29:20.025827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:01.958 [2024-11-20 07:29:20.033750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:01.958 [2024-11-20 07:29:20.033761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:01.958 [2024-11-20 07:29:20.033769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:01.958 
[2024-11-20 07:29:20.033775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:01.958 ===================================================== 00:15:01.958 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:01.958 ===================================================== 00:15:01.958 Controller Capabilities/Features 00:15:01.958 ================================ 00:15:01.958 Vendor ID: 4e58 00:15:01.958 Subsystem Vendor ID: 4e58 00:15:01.958 Serial Number: SPDK2 00:15:01.958 Model Number: SPDK bdev Controller 00:15:01.959 Firmware Version: 25.01 00:15:01.959 Recommended Arb Burst: 6 00:15:01.959 IEEE OUI Identifier: 8d 6b 50 00:15:01.959 Multi-path I/O 00:15:01.959 May have multiple subsystem ports: Yes 00:15:01.959 May have multiple controllers: Yes 00:15:01.959 Associated with SR-IOV VF: No 00:15:01.959 Max Data Transfer Size: 131072 00:15:01.959 Max Number of Namespaces: 32 00:15:01.959 Max Number of I/O Queues: 127 00:15:01.959 NVMe Specification Version (VS): 1.3 00:15:01.959 NVMe Specification Version (Identify): 1.3 00:15:01.959 Maximum Queue Entries: 256 00:15:01.959 Contiguous Queues Required: Yes 00:15:01.959 Arbitration Mechanisms Supported 00:15:01.959 Weighted Round Robin: Not Supported 00:15:01.959 Vendor Specific: Not Supported 00:15:01.959 Reset Timeout: 15000 ms 00:15:01.959 Doorbell Stride: 4 bytes 00:15:01.959 NVM Subsystem Reset: Not Supported 00:15:01.959 Command Sets Supported 00:15:01.959 NVM Command Set: Supported 00:15:01.959 Boot Partition: Not Supported 00:15:01.959 Memory Page Size Minimum: 4096 bytes 00:15:01.959 Memory Page Size Maximum: 4096 bytes 00:15:01.959 Persistent Memory Region: Not Supported 00:15:01.959 Optional Asynchronous Events Supported 00:15:01.959 Namespace Attribute Notices: Supported 00:15:01.959 Firmware Activation Notices: Not Supported 00:15:01.959 ANA Change Notices: Not Supported 00:15:01.959 PLE Aggregate Log Change Notices: Not Supported 00:15:01.959 LBA Status Info Alert Notices: Not Supported 00:15:01.959 EGE Aggregate Log Change Notices: Not Supported 00:15:01.959 Normal NVM Subsystem Shutdown event: Not Supported 00:15:01.959 Zone Descriptor Change Notices: Not Supported 00:15:01.959 Discovery Log Change Notices: Not Supported 00:15:01.959 Controller Attributes 00:15:01.959 128-bit Host Identifier: Supported 00:15:01.959 Non-Operational Permissive Mode: Not Supported 00:15:01.959 NVM Sets: Not Supported 00:15:01.959 Read Recovery Levels: Not Supported 00:15:01.959 Endurance Groups: Not Supported 00:15:01.959 Predictable Latency Mode: Not Supported 00:15:01.959 Traffic Based Keep ALive: Not Supported 00:15:01.959 Namespace Granularity: Not Supported 00:15:01.959 SQ Associations: Not Supported 00:15:01.959 UUID List: Not Supported 00:15:01.959 Multi-Domain Subsystem: Not Supported 00:15:01.959 Fixed Capacity Management: Not Supported 00:15:01.959 Variable Capacity Management: Not Supported 00:15:01.959 Delete Endurance Group: Not Supported 00:15:01.959 Delete NVM Set: Not Supported 00:15:01.959 Extended LBA Formats Supported: Not Supported 00:15:01.959 Flexible Data Placement Supported: Not Supported 00:15:01.959 00:15:01.959 Controller Memory Buffer Support 00:15:01.959 ================================ 00:15:01.959 Supported: No 00:15:01.959 00:15:01.959 Persistent Memory Region Support 00:15:01.959 ================================ 00:15:01.959 Supported: No 00:15:01.959 00:15:01.959 Admin Command Set Attributes 
00:15:01.959 ============================ 00:15:01.959 Security Send/Receive: Not Supported 00:15:01.959 Format NVM: Not Supported 00:15:01.959 Firmware Activate/Download: Not Supported 00:15:01.959 Namespace Management: Not Supported 00:15:01.959 Device Self-Test: Not Supported 00:15:01.959 Directives: Not Supported 00:15:01.959 NVMe-MI: Not Supported 00:15:01.959 Virtualization Management: Not Supported 00:15:01.959 Doorbell Buffer Config: Not Supported 00:15:01.959 Get LBA Status Capability: Not Supported 00:15:01.959 Command & Feature Lockdown Capability: Not Supported 00:15:01.959 Abort Command Limit: 4 00:15:01.959 Async Event Request Limit: 4 00:15:01.959 Number of Firmware Slots: N/A 00:15:01.959 Firmware Slot 1 Read-Only: N/A 00:15:01.959 Firmware Activation Without Reset: N/A 00:15:01.959 Multiple Update Detection Support: N/A 00:15:01.959 Firmware Update Granularity: No Information Provided 00:15:01.959 Per-Namespace SMART Log: No 00:15:01.959 Asymmetric Namespace Access Log Page: Not Supported 00:15:01.959 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:01.959 Command Effects Log Page: Supported 00:15:01.959 Get Log Page Extended Data: Supported 00:15:01.959 Telemetry Log Pages: Not Supported 00:15:01.959 Persistent Event Log Pages: Not Supported 00:15:01.959 Supported Log Pages Log Page: May Support 00:15:01.959 Commands Supported & Effects Log Page: Not Supported 00:15:01.959 Feature Identifiers & Effects Log Page:May Support 00:15:01.959 NVMe-MI Commands & Effects Log Page: May Support 00:15:01.959 Data Area 4 for Telemetry Log: Not Supported 00:15:01.959 Error Log Page Entries Supported: 128 00:15:01.959 Keep Alive: Supported 00:15:01.959 Keep Alive Granularity: 10000 ms 00:15:01.959 00:15:01.959 NVM Command Set Attributes 00:15:01.959 ========================== 00:15:01.959 Submission Queue Entry Size 00:15:01.959 Max: 64 00:15:01.959 Min: 64 00:15:01.959 Completion Queue Entry Size 00:15:01.959 Max: 16 00:15:01.959 Min: 16 00:15:01.959 Number of Namespaces: 32 00:15:01.959 Compare Command: Supported 00:15:01.959 Write Uncorrectable Command: Not Supported 00:15:01.959 Dataset Management Command: Supported 00:15:01.959 Write Zeroes Command: Supported 00:15:01.959 Set Features Save Field: Not Supported 00:15:01.959 Reservations: Not Supported 00:15:01.959 Timestamp: Not Supported 00:15:01.959 Copy: Supported 00:15:01.959 Volatile Write Cache: Present 00:15:01.959 Atomic Write Unit (Normal): 1 00:15:01.959 Atomic Write Unit (PFail): 1 00:15:01.959 Atomic Compare & Write Unit: 1 00:15:01.959 Fused Compare & Write: Supported 00:15:01.959 Scatter-Gather List 00:15:01.959 SGL Command Set: Supported (Dword aligned) 00:15:01.959 SGL Keyed: Not Supported 00:15:01.959 SGL Bit Bucket Descriptor: Not Supported 00:15:01.959 SGL Metadata Pointer: Not Supported 00:15:01.959 Oversized SGL: Not Supported 00:15:01.959 SGL Metadata Address: Not Supported 00:15:01.959 SGL Offset: Not Supported 00:15:01.959 Transport SGL Data Block: Not Supported 00:15:01.959 Replay Protected Memory Block: Not Supported 00:15:01.959 00:15:01.959 Firmware Slot Information 00:15:01.959 ========================= 00:15:01.959 Active slot: 1 00:15:01.959 Slot 1 Firmware Revision: 25.01 00:15:01.959 00:15:01.959 00:15:01.959 Commands Supported and Effects 00:15:01.959 ============================== 00:15:01.959 Admin Commands 00:15:01.959 -------------- 00:15:01.959 Get Log Page (02h): Supported 00:15:01.959 Identify (06h): Supported 00:15:01.959 Abort (08h): Supported 00:15:01.959 Set Features (09h): Supported 
00:15:01.959 Get Features (0Ah): Supported
00:15:01.959 Asynchronous Event Request (0Ch): Supported
00:15:01.959 Keep Alive (18h): Supported
00:15:01.959 I/O Commands
00:15:01.959 ------------
00:15:01.959 Flush (00h): Supported LBA-Change
00:15:01.959 Write (01h): Supported LBA-Change
00:15:01.959 Read (02h): Supported
00:15:01.959 Compare (05h): Supported
00:15:01.959 Write Zeroes (08h): Supported LBA-Change
00:15:01.959 Dataset Management (09h): Supported LBA-Change
00:15:01.959 Copy (19h): Supported LBA-Change
00:15:01.959 
00:15:01.959 Error Log
00:15:01.959 =========
00:15:01.960 
00:15:01.960 Arbitration
00:15:01.960 ===========
00:15:01.960 Arbitration Burst: 1
00:15:01.960 
00:15:01.960 Power Management
00:15:01.960 ================
00:15:01.960 Number of Power States: 1
00:15:01.960 Current Power State: Power State #0
00:15:01.960 Power State #0:
00:15:01.960 Max Power: 0.00 W
00:15:01.960 Non-Operational State: Operational
00:15:01.960 Entry Latency: Not Reported
00:15:01.960 Exit Latency: Not Reported
00:15:01.960 Relative Read Throughput: 0
00:15:01.960 Relative Read Latency: 0
00:15:01.960 Relative Write Throughput: 0
00:15:01.960 Relative Write Latency: 0
00:15:01.960 Idle Power: Not Reported
00:15:01.960 Active Power: Not Reported
00:15:01.960 Non-Operational Permissive Mode: Not Supported
00:15:01.960 
00:15:01.960 Health Information
00:15:01.960 ==================
00:15:01.960 Critical Warnings:
00:15:01.960 Available Spare Space: OK
00:15:01.960 Temperature: OK
00:15:01.960 Device Reliability: OK
00:15:01.960 Read Only: No
00:15:01.960 Volatile Memory Backup: OK
00:15:01.960 Current Temperature: 0 Kelvin (-273 Celsius)
00:15:01.960 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:15:01.960 Available Spare: 0%
00:15:01.960 Available Spare Threshold: 0%
00:15:01.960 Life Percentage Used: 0%
00:15:01.960 Data Units Read: 0
00:15:01.960 Data Units Written: 0
00:15:01.960 Host Read Commands: 0
00:15:01.960 Host Write Commands: 0
00:15:01.960 Controller Busy Time: 0 minutes
00:15:01.960 Power Cycles: 0
00:15:01.960 Power On Hours: 0 hours
00:15:01.960 Unsafe Shutdowns: 0
00:15:01.960 Unrecoverable Media Errors: 0
00:15:01.960 Lifetime Error Log Entries: 0
00:15:01.960 Warning Temperature Time: 0 minutes
00:15:01.960 Critical Temperature Time: 0 minutes
00:15:01.960 
00:15:01.960 Number of Queues
00:15:01.960 ================
00:15:01.960 Number of I/O Submission Queues: 127
00:15:01.960 Number of I/O Completion Queues: 127
00:15:01.960 
00:15:01.960 Active Namespaces
00:15:01.960 =================
00:15:01.960 Namespace ID:1
00:15:01.960 Error Recovery Timeout: Unlimited
00:15:01.960 Command Set Identifier: NVM (00h)
00:15:01.960 Deallocate: Supported
00:15:01.960 Deallocated/Unwritten Error: Not Supported
00:15:01.960 Deallocated Read Value: Unknown
00:15:01.960 Deallocate in Write Zeroes: Not Supported
00:15:01.960 Deallocated Guard Field: 0xFFFF
00:15:01.960 Flush: Supported
00:15:01.960 Reservation: Supported
00:15:01.960 Namespace Sharing Capabilities: Multiple Controllers
00:15:01.960 Size (in LBAs): 131072 (0GiB)
00:15:01.960 Capacity (in LBAs): 131072 (0GiB)
00:15:01.960 Utilization (in LBAs): 131072 (0GiB)
00:15:01.960 NGUID: 6D42464B9AF548AE8F2B6389D3B9D74B
00:15:01.960 UUID: 6d42464b-9af5-48ae-8f2b-6389d3b9d74b
00:15:01.960 Thin Provisioning: Not Supported
00:15:01.960 Per-NS Atomic Units: Yes
00:15:01.960 Atomic Boundary Size (Normal): 0
00:15:01.960 Atomic Boundary Size (PFail): 0
00:15:01.960 Atomic Boundary Offset: 0
00:15:01.960 Maximum Single Source Range Length: 65535
00:15:01.960 Maximum Copy Length: 65535
00:15:01.960 Maximum Source Range Count: 1
00:15:01.960 NGUID/EUI64 Never Reused: No
00:15:01.960 Namespace Write Protected: No
00:15:01.960 Number of LBA Formats: 1
00:15:01.960 Current LBA Format: LBA Format #00
00:15:01.960 LBA Format #00: Data Size: 512 Metadata Size: 0
00:15:01.960 
00:15:01.960 [2024-11-20 07:29:20.033851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:15:01.960 [2024-11-20 07:29:20.041750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:15:01.960 [2024-11-20 07:29:20.041772] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD
00:15:01.960 [2024-11-20 07:29:20.041779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:01.960 [2024-11-20 07:29:20.041784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:01.960 [2024-11-20 07:29:20.041789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:01.960 [2024-11-20 07:29:20.041794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:01.960 [2024-11-20 07:29:20.041832] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:15:01.960 [2024-11-20 07:29:20.041840] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:15:01.960 [2024-11-20 07:29:20.042838] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:15:01.960 [2024-11-20 07:29:20.042878] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us
00:15:01.960 [2024-11-20 07:29:20.042884] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms
00:15:01.960 [2024-11-20 07:29:20.043839] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:15:01.960 [2024-11-20 07:29:20.043848] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds
00:15:01.960 [2024-11-20 07:29:20.043897] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:15:01.960 [2024-11-20 07:29:20.044861] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:01.960 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:15:02.221 [2024-11-20 07:29:20.213785] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:15:07.512 Initializing NVMe Controllers
00:15:07.512 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:07.512 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:07.512 Initialization complete. Launching workers. 00:15:07.512 ======================================================== 00:15:07.512 Latency(us) 00:15:07.512 Device Information : IOPS MiB/s Average min max 00:15:07.512 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39981.54 156.18 3201.34 837.46 6823.68 00:15:07.512 ======================================================== 00:15:07.512 Total : 39981.54 156.18 3201.34 837.46 6823.68 00:15:07.512 00:15:07.512 [2024-11-20 07:29:25.322944] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:07.512 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:07.512 [2024-11-20 07:29:25.512502] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:12.803 Initializing NVMe Controllers 00:15:12.803 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:12.804 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:12.804 Initialization complete. Launching workers. 00:15:12.804 ======================================================== 00:15:12.804 Latency(us) 00:15:12.804 Device Information : IOPS MiB/s Average min max 00:15:12.804 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40023.40 156.34 3198.70 850.36 6911.83 00:15:12.804 ======================================================== 00:15:12.804 Total : 40023.40 156.34 3198.70 850.36 6911.83 00:15:12.804 00:15:12.804 [2024-11-20 07:29:30.533535] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:12.804 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:12.804 [2024-11-20 07:29:30.737751] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:18.098 [2024-11-20 07:29:35.890818] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:18.098 Initializing NVMe Controllers 00:15:18.098 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:18.098 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:18.098 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:18.098 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:18.098 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:18.098 Initialization complete. Launching workers. 
00:15:18.098 Starting thread on core 2 00:15:18.098 Starting thread on core 3 00:15:18.098 Starting thread on core 1 00:15:18.098 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:18.098 [2024-11-20 07:29:36.135267] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.397 [2024-11-20 07:29:39.185298] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.397 Initializing NVMe Controllers 00:15:21.397 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.397 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.397 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:21.397 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:21.397 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:21.397 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:21.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:21.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:21.397 Initialization complete. Launching workers. 00:15:21.397 Starting thread on core 1 with urgent priority queue 00:15:21.397 Starting thread on core 2 with urgent priority queue 00:15:21.397 Starting thread on core 3 with urgent priority queue 00:15:21.397 Starting thread on core 0 with urgent priority queue 00:15:21.397 SPDK bdev Controller (SPDK2 ) core 0: 9161.00 IO/s 10.92 secs/100000 ios 00:15:21.397 SPDK bdev Controller (SPDK2 ) core 1: 8258.33 IO/s 12.11 secs/100000 ios 00:15:21.397 SPDK bdev Controller (SPDK2 ) core 2: 9402.00 IO/s 10.64 secs/100000 ios 00:15:21.397 SPDK bdev Controller (SPDK2 ) core 3: 8085.00 IO/s 12.37 secs/100000 ios 00:15:21.397 ======================================================== 00:15:21.397 00:15:21.397 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:21.397 [2024-11-20 07:29:39.427110] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.398 Initializing NVMe Controllers 00:15:21.398 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.398 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.398 Namespace ID: 1 size: 0GB 00:15:21.398 Initialization complete. 00:15:21.398 INFO: using host memory buffer for IO 00:15:21.398 Hello world! 
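The throughput and arbitration tables above are internally consistent and easy to sanity-check: spdk_nvme_perf was run with -o 4096, so MiB/s is IOPS times 4096 bytes, and the arbitration tool's secs/100000 ios column is just 100000 divided by its IO/s column. A quick awk check with the figures copied from the tables above (an editorial aid, not part of the captured run):

    awk 'BEGIN {
        # perf runs above: MiB/s = IOPS * 4096 bytes / 2^20
        printf "read:  %.2f MiB/s\n", 39981.54 * 4096 / 1048576    # table: 156.18
        printf "write: %.2f MiB/s\n", 40023.40 * 4096 / 1048576    # table: 156.34
        # arbitration run above: secs/100000 ios = 100000 / (IO/s)
        split("9161.00 8258.33 9402.00 8085.00", iops, " ")
        for (i = 1; i <= 4; i++)
            printf "core %d: %.2f secs/100000 ios\n", i - 1, 100000 / iops[i]
    }'

Both perf figures and all four arbitration rows reproduce the printed values exactly.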
00:15:21.398 [2024-11-20 07:29:39.439188] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.398 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:21.658 [2024-11-20 07:29:39.675399] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.603 Initializing NVMe Controllers 00:15:22.603 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.603 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.603 Initialization complete. Launching workers. 00:15:22.603 submit (in ns) avg, min, max = 6268.7, 2840.0, 3999535.0 00:15:22.603 complete (in ns) avg, min, max = 16726.6, 1625.0, 4993787.5 00:15:22.603 00:15:22.603 Submit histogram 00:15:22.603 ================ 00:15:22.603 Range in us Cumulative Count 00:15:22.603 2.840 - 2.853: 0.6135% ( 127) 00:15:22.603 2.853 - 2.867: 2.8983% ( 473) 00:15:22.603 2.867 - 2.880: 5.9463% ( 631) 00:15:22.603 2.880 - 2.893: 10.1536% ( 871) 00:15:22.603 2.893 - 2.907: 15.1531% ( 1035) 00:15:22.603 2.907 - 2.920: 21.5776% ( 1330) 00:15:22.603 2.920 - 2.933: 27.3983% ( 1205) 00:15:22.603 2.933 - 2.947: 33.2045% ( 1202) 00:15:22.603 2.947 - 2.960: 38.3248% ( 1060) 00:15:22.603 2.960 - 2.973: 43.0538% ( 979) 00:15:22.603 2.973 - 2.987: 48.1693% ( 1059) 00:15:22.603 2.987 - 3.000: 54.3571% ( 1281) 00:15:22.603 3.000 - 3.013: 62.9939% ( 1788) 00:15:22.603 3.013 - 3.027: 72.6548% ( 2000) 00:15:22.603 3.027 - 3.040: 80.3207% ( 1587) 00:15:22.603 3.040 - 3.053: 86.9433% ( 1371) 00:15:22.603 3.053 - 3.067: 92.0056% ( 1048) 00:15:22.603 3.067 - 3.080: 95.0681% ( 634) 00:15:22.603 3.080 - 3.093: 97.2998% ( 462) 00:15:22.603 3.093 - 3.107: 98.4929% ( 247) 00:15:22.603 3.107 - 3.120: 99.0291% ( 111) 00:15:22.603 3.120 - 3.133: 99.3141% ( 59) 00:15:22.603 3.133 - 3.147: 99.4107% ( 20) 00:15:22.603 3.147 - 3.160: 99.4928% ( 17) 00:15:22.603 3.160 - 3.173: 99.5218% ( 6) 00:15:22.603 3.173 - 3.187: 99.5459% ( 5) 00:15:22.603 3.200 - 3.213: 99.5604% ( 3) 00:15:22.603 3.213 - 3.227: 99.5701% ( 2) 00:15:22.603 3.227 - 3.240: 99.5749% ( 1) 00:15:22.603 3.253 - 3.267: 99.5846% ( 2) 00:15:22.603 3.293 - 3.307: 99.5894% ( 1) 00:15:22.603 3.307 - 3.320: 99.5942% ( 1) 00:15:22.603 3.333 - 3.347: 99.5991% ( 1) 00:15:22.603 3.347 - 3.360: 99.6136% ( 3) 00:15:22.603 3.400 - 3.413: 99.6184% ( 1) 00:15:22.603 3.413 - 3.440: 99.6232% ( 1) 00:15:22.603 3.440 - 3.467: 99.6281% ( 1) 00:15:22.603 3.520 - 3.547: 99.6329% ( 1) 00:15:22.603 3.573 - 3.600: 99.6377% ( 1) 00:15:22.603 3.600 - 3.627: 99.6425% ( 1) 00:15:22.603 3.627 - 3.653: 99.6474% ( 1) 00:15:22.603 3.813 - 3.840: 99.6522% ( 1) 00:15:22.603 4.027 - 4.053: 99.6570% ( 1) 00:15:22.603 4.080 - 4.107: 99.6619% ( 1) 00:15:22.603 4.293 - 4.320: 99.6715% ( 2) 00:15:22.603 4.400 - 4.427: 99.6764% ( 1) 00:15:22.603 4.507 - 4.533: 99.6812% ( 1) 00:15:22.603 4.560 - 4.587: 99.6860% ( 1) 00:15:22.603 4.587 - 4.613: 99.6909% ( 1) 00:15:22.603 4.693 - 4.720: 99.7005% ( 2) 00:15:22.603 4.800 - 4.827: 99.7053% ( 1) 00:15:22.603 4.907 - 4.933: 99.7102% ( 1) 00:15:22.603 4.960 - 4.987: 99.7150% ( 1) 00:15:22.603 5.013 - 5.040: 99.7198% ( 1) 00:15:22.603 5.040 - 5.067: 99.7295% ( 2) 00:15:22.603 5.067 - 5.093: 99.7343% ( 1) 00:15:22.603 5.093 - 5.120: 99.7488% ( 3) 00:15:22.603 5.173 - 
5.200: 99.7536% ( 1) 00:15:22.603 5.227 - 5.253: 99.7585% ( 1) 00:15:22.603 5.387 - 5.413: 99.7633% ( 1) 00:15:22.603 5.440 - 5.467: 99.7778% ( 3) 00:15:22.603 5.520 - 5.547: 99.7826% ( 1) 00:15:22.603 5.600 - 5.627: 99.7875% ( 1) 00:15:22.603 5.733 - 5.760: 99.7923% ( 1) 00:15:22.603 5.787 - 5.813: 99.7971% ( 1) 00:15:22.603 5.867 - 5.893: 99.8020% ( 1) 00:15:22.603 5.920 - 5.947: 99.8068% ( 1) 00:15:22.603 5.947 - 5.973: 99.8116% ( 1) 00:15:22.603 6.000 - 6.027: 99.8164% ( 1) 00:15:22.603 6.053 - 6.080: 99.8213% ( 1) 00:15:22.603 6.080 - 6.107: 99.8309% ( 2) 00:15:22.603 6.133 - 6.160: 99.8406% ( 2) 00:15:22.603 6.187 - 6.213: 99.8503% ( 2) 00:15:22.603 6.240 - 6.267: 99.8551% ( 1) 00:15:22.603 6.400 - 6.427: 99.8647% ( 2) 00:15:22.603 6.427 - 6.453: 99.8696% ( 1) 00:15:22.603 6.480 - 6.507: 99.8744% ( 1) 00:15:22.603 6.587 - 6.613: 99.8792% ( 1) 00:15:22.603 6.640 - 6.667: 99.8841% ( 1) 00:15:22.603 [2024-11-20 07:29:40.766385] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.603 6.720 - 6.747: 99.8889% ( 1) 00:15:22.603 6.800 - 6.827: 99.8986% ( 2) 00:15:22.603 6.827 - 6.880: 99.9034% ( 1) 00:15:22.603 7.040 - 7.093: 99.9082% ( 1) 00:15:22.604 7.307 - 7.360: 99.9131% ( 1) 00:15:22.604 12.320 - 12.373: 99.9179% ( 1) 00:15:22.604 3986.773 - 4014.080: 100.0000% ( 17) 00:15:22.604 00:15:22.604 Complete histogram 00:15:22.604 ================== 00:15:22.604 Range in us Cumulative Count 00:15:22.604 1.620 - 1.627: 0.0048% ( 1) 00:15:22.604 1.633 - 1.640: 0.1497% ( 30) 00:15:22.604 1.640 - 1.647: 0.9178% ( 159) 00:15:22.604 1.647 - 1.653: 1.0385% ( 25) 00:15:22.604 1.653 - 1.660: 1.0869% ( 10) 00:15:22.604 1.660 - 1.667: 1.2124% ( 26) 00:15:22.604 1.667 - 1.673: 1.2656% ( 11) 00:15:22.604 1.673 - 1.680: 1.2752% ( 2) 00:15:22.604 1.680 - 1.687: 1.2994% ( 5) 00:15:22.604 1.687 - 1.693: 36.5375% ( 7295) 00:15:22.604 1.693 - 1.700: 51.0289% ( 3000) 00:15:22.604 1.700 - 1.707: 57.6031% ( 1361) 00:15:22.604 1.707 - 1.720: 74.4034% ( 3478) 00:15:22.604 1.720 - 1.733: 81.8423% ( 1540) 00:15:22.604 1.733 - 1.747: 83.5910% ( 362) 00:15:22.604 1.747 - 1.760: 87.2524% ( 758) 00:15:22.604 1.760 - 1.773: 93.0635% ( 1203) 00:15:22.604 1.773 - 1.787: 96.7443% ( 762) 00:15:22.604 1.787 - 1.800: 98.5605% ( 376) 00:15:22.604 1.800 - 1.813: 99.2078% ( 134) 00:15:22.604 1.813 - 1.827: 99.3769% ( 35) 00:15:22.604 1.827 - 1.840: 99.4107% ( 7) 00:15:22.604 1.840 - 1.853: 99.4252% ( 3) 00:15:22.604 1.853 - 1.867: 99.4300% ( 1) 00:15:22.604 1.880 - 1.893: 99.4348% ( 1) 00:15:22.604 1.920 - 1.933: 99.4397% ( 1) 00:15:22.604 1.973 - 1.987: 99.4445% ( 1) 00:15:22.604 2.347 - 2.360: 99.4493% ( 1) 00:15:22.604 3.200 - 3.213: 99.4542% ( 1) 00:15:22.604 3.413 - 3.440: 99.4590% ( 1) 00:15:22.604 3.440 - 3.467: 99.4638% ( 1) 00:15:22.604 3.600 - 3.627: 99.4735% ( 2) 00:15:22.604 3.627 - 3.653: 99.4783% ( 1) 00:15:22.604 3.760 - 3.787: 99.4831% ( 1) 00:15:22.604 3.867 - 3.893: 99.4880% ( 1) 00:15:22.604 3.920 - 3.947: 99.4928% ( 1) 00:15:22.604 4.000 - 4.027: 99.4976% ( 1) 00:15:22.604 4.053 - 4.080: 99.5025% ( 1) 00:15:22.604 4.107 - 4.133: 99.5073% ( 1) 00:15:22.604 4.240 - 4.267: 99.5121% ( 1) 00:15:22.604 4.320 - 4.347: 99.5170% ( 1) 00:15:22.604 4.400 - 4.427: 99.5218% ( 1) 00:15:22.604 4.453 - 4.480: 99.5266% ( 1) 00:15:22.604 4.640 - 4.667: 99.5314% ( 1) 00:15:22.604 4.667 - 4.693: 99.5363% ( 1) 00:15:22.604 4.720 - 4.747: 99.5411% ( 1) 00:15:22.604 4.800 - 4.827: 99.5459% ( 1) 00:15:22.604 4.827 - 4.853: 99.5508% ( 1) 00:15:22.604 4.933 - 4.960: 99.5556% ( 
1) 00:15:22.604 4.987 - 5.013: 99.5604% ( 1) 00:15:22.604 5.040 - 5.067: 99.5653% ( 1) 00:15:22.604 5.067 - 5.093: 99.5701% ( 1) 00:15:22.604 5.120 - 5.147: 99.5798% ( 2) 00:15:22.604 5.200 - 5.227: 99.5846% ( 1) 00:15:22.604 5.253 - 5.280: 99.5894% ( 1) 00:15:22.604 5.440 - 5.467: 99.5942% ( 1) 00:15:22.604 5.467 - 5.493: 99.5991% ( 1) 00:15:22.604 5.573 - 5.600: 99.6039% ( 1) 00:15:22.604 6.293 - 6.320: 99.6087% ( 1) 00:15:22.604 7.627 - 7.680: 99.6136% ( 1) 00:15:22.604 8.960 - 9.013: 99.6184% ( 1) 00:15:22.604 34.347 - 34.560: 99.6232% ( 1) 00:15:22.604 2143.573 - 2157.227: 99.6281% ( 1) 00:15:22.604 3986.773 - 4014.080: 99.9952% ( 76) 00:15:22.604 4969.813 - 4997.120: 100.0000% ( 1) 00:15:22.604 00:15:22.604 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:22.604 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:22.604 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:22.604 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:22.604 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:22.864 [ 00:15:22.864 { 00:15:22.864 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:22.864 "subtype": "Discovery", 00:15:22.864 "listen_addresses": [], 00:15:22.864 "allow_any_host": true, 00:15:22.864 "hosts": [] 00:15:22.864 }, 00:15:22.864 { 00:15:22.864 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:22.864 "subtype": "NVMe", 00:15:22.864 "listen_addresses": [ 00:15:22.864 { 00:15:22.864 "trtype": "VFIOUSER", 00:15:22.864 "adrfam": "IPv4", 00:15:22.864 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:22.864 "trsvcid": "0" 00:15:22.864 } 00:15:22.864 ], 00:15:22.864 "allow_any_host": true, 00:15:22.864 "hosts": [], 00:15:22.864 "serial_number": "SPDK1", 00:15:22.864 "model_number": "SPDK bdev Controller", 00:15:22.864 "max_namespaces": 32, 00:15:22.864 "min_cntlid": 1, 00:15:22.864 "max_cntlid": 65519, 00:15:22.864 "namespaces": [ 00:15:22.864 { 00:15:22.864 "nsid": 1, 00:15:22.864 "bdev_name": "Malloc1", 00:15:22.864 "name": "Malloc1", 00:15:22.864 "nguid": "AAFD9AA0029342819DD704D2BFA4848B", 00:15:22.864 "uuid": "aafd9aa0-0293-4281-9dd7-04d2bfa4848b" 00:15:22.864 }, 00:15:22.864 { 00:15:22.864 "nsid": 2, 00:15:22.864 "bdev_name": "Malloc3", 00:15:22.864 "name": "Malloc3", 00:15:22.864 "nguid": "60C043113EB64B548107B90514CD687A", 00:15:22.865 "uuid": "60c04311-3eb6-4b54-8107-b90514cd687a" 00:15:22.865 } 00:15:22.865 ] 00:15:22.865 }, 00:15:22.865 { 00:15:22.865 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:22.865 "subtype": "NVMe", 00:15:22.865 "listen_addresses": [ 00:15:22.865 { 00:15:22.865 "trtype": "VFIOUSER", 00:15:22.865 "adrfam": "IPv4", 00:15:22.865 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:22.865 "trsvcid": "0" 00:15:22.865 } 00:15:22.865 ], 00:15:22.865 "allow_any_host": true, 00:15:22.865 "hosts": [], 00:15:22.865 "serial_number": "SPDK2", 00:15:22.865 "model_number": "SPDK bdev Controller", 00:15:22.865 "max_namespaces": 32, 00:15:22.865 "min_cntlid": 1, 00:15:22.865 "max_cntlid": 65519, 00:15:22.865 "namespaces": [ 00:15:22.865 { 00:15:22.865 "nsid": 1, 00:15:22.865 "bdev_name": 
"Malloc2", 00:15:22.865 "name": "Malloc2", 00:15:22.865 "nguid": "6D42464B9AF548AE8F2B6389D3B9D74B", 00:15:22.865 "uuid": "6d42464b-9af5-48ae-8f2b-6389d3b9d74b" 00:15:22.865 } 00:15:22.865 ] 00:15:22.865 } 00:15:22.865 ] 00:15:22.865 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:22.865 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3357168 00:15:22.865 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:22.865 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:22.865 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:22.865 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:22.865 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:22.865 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:22.865 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:22.865 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:23.125 [2024-11-20 07:29:41.134149] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:23.125 Malloc4 00:15:23.125 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:23.385 [2024-11-20 07:29:41.331442] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:23.385 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:23.385 Asynchronous Event Request test 00:15:23.385 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:23.385 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:23.385 Registering asynchronous event callbacks... 00:15:23.385 Starting namespace attribute notice tests for all controllers... 00:15:23.385 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:23.385 aer_cb - Changed Namespace 00:15:23.385 Cleaning up... 
00:15:23.385 [ 00:15:23.385 { 00:15:23.385 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:23.385 "subtype": "Discovery", 00:15:23.385 "listen_addresses": [], 00:15:23.385 "allow_any_host": true, 00:15:23.385 "hosts": [] 00:15:23.385 }, 00:15:23.385 { 00:15:23.385 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:23.385 "subtype": "NVMe", 00:15:23.385 "listen_addresses": [ 00:15:23.385 { 00:15:23.385 "trtype": "VFIOUSER", 00:15:23.385 "adrfam": "IPv4", 00:15:23.385 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:23.385 "trsvcid": "0" 00:15:23.385 } 00:15:23.385 ], 00:15:23.385 "allow_any_host": true, 00:15:23.385 "hosts": [], 00:15:23.385 "serial_number": "SPDK1", 00:15:23.385 "model_number": "SPDK bdev Controller", 00:15:23.385 "max_namespaces": 32, 00:15:23.385 "min_cntlid": 1, 00:15:23.385 "max_cntlid": 65519, 00:15:23.385 "namespaces": [ 00:15:23.385 { 00:15:23.385 "nsid": 1, 00:15:23.385 "bdev_name": "Malloc1", 00:15:23.385 "name": "Malloc1", 00:15:23.385 "nguid": "AAFD9AA0029342819DD704D2BFA4848B", 00:15:23.385 "uuid": "aafd9aa0-0293-4281-9dd7-04d2bfa4848b" 00:15:23.385 }, 00:15:23.385 { 00:15:23.385 "nsid": 2, 00:15:23.385 "bdev_name": "Malloc3", 00:15:23.385 "name": "Malloc3", 00:15:23.385 "nguid": "60C043113EB64B548107B90514CD687A", 00:15:23.385 "uuid": "60c04311-3eb6-4b54-8107-b90514cd687a" 00:15:23.385 } 00:15:23.385 ] 00:15:23.385 }, 00:15:23.385 { 00:15:23.385 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:23.385 "subtype": "NVMe", 00:15:23.385 "listen_addresses": [ 00:15:23.385 { 00:15:23.385 "trtype": "VFIOUSER", 00:15:23.385 "adrfam": "IPv4", 00:15:23.385 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:23.385 "trsvcid": "0" 00:15:23.385 } 00:15:23.385 ], 00:15:23.385 "allow_any_host": true, 00:15:23.385 "hosts": [], 00:15:23.385 "serial_number": "SPDK2", 00:15:23.385 "model_number": "SPDK bdev Controller", 00:15:23.385 "max_namespaces": 32, 00:15:23.385 "min_cntlid": 1, 00:15:23.385 "max_cntlid": 65519, 00:15:23.385 "namespaces": [ 00:15:23.385 { 00:15:23.385 "nsid": 1, 00:15:23.385 "bdev_name": "Malloc2", 00:15:23.385 "name": "Malloc2", 00:15:23.385 "nguid": "6D42464B9AF548AE8F2B6389D3B9D74B", 00:15:23.385 "uuid": "6d42464b-9af5-48ae-8f2b-6389d3b9d74b" 00:15:23.385 }, 00:15:23.385 { 00:15:23.385 "nsid": 2, 00:15:23.385 "bdev_name": "Malloc4", 00:15:23.385 "name": "Malloc4", 00:15:23.385 "nguid": "C482BE87C68A45FDB0DD621413994F8C", 00:15:23.385 "uuid": "c482be87-c68a-45fd-b0dd-621413994f8c" 00:15:23.385 } 00:15:23.385 ] 00:15:23.385 } 00:15:23.385 ] 00:15:23.385 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3357168 00:15:23.385 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:23.385 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3348109 00:15:23.385 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 3348109 ']' 00:15:23.385 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3348109 00:15:23.385 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:23.385 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:23.385 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3348109 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3348109' 00:15:23.645 killing process with pid 3348109 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3348109 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3348109 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3357298 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3357298' 00:15:23.645 Process pid: 3357298 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3357298 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3357298 ']' 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:23.645 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:23.645 [2024-11-20 07:29:41.805383] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:23.645 [2024-11-20 07:29:41.806330] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:15:23.645 [2024-11-20 07:29:41.806375] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.906 [2024-11-20 07:29:41.893379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.906 [2024-11-20 07:29:41.925786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.906 [2024-11-20 07:29:41.925821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.906 [2024-11-20 07:29:41.925827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.906 [2024-11-20 07:29:41.925832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.906 [2024-11-20 07:29:41.925836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.906 [2024-11-20 07:29:41.927125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.906 [2024-11-20 07:29:41.927277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.906 [2024-11-20 07:29:41.927427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.906 [2024-11-20 07:29:41.927429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.906 [2024-11-20 07:29:41.980186] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:23.906 [2024-11-20 07:29:41.981212] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:23.906 [2024-11-20 07:29:41.981449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:23.906 [2024-11-20 07:29:41.982622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:23.906 [2024-11-20 07:29:41.982649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
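With the target restarted in interrupt mode, the trace that follows rebuilds the same two vfio-user devices as before. Condensed into the underlying RPC sequence (a sketch: the rpc.py path is shortened into a variable, and -M -I are simply the transport_args the script passes through, whose semantics the trace does not expand):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user
    for i in 1 2; do
        dir=/var/run/vfio-user/domain/vfio-user$i/$i
        mkdir -p "$dir"
        $rpc bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB bdev, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
    done

Each loop iteration mirrors the @68 through @74 lines of nvmf_vfio_user.sh in the trace below.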
00:15:24.475 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:24.475 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:15:24.475 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:25.416 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:25.677 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:25.677 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:25.677 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:25.677 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:25.677 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:25.937 Malloc1 00:15:25.937 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:26.197 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:26.458 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:26.458 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:26.458 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:26.458 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:26.719 Malloc2 00:15:26.719 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:26.979 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:26.979 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:27.239 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:27.239 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3357298 00:15:27.239 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 3357298 ']' 00:15:27.239 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3357298 00:15:27.239 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:27.239 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:27.239 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3357298 00:15:27.239 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:27.239 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:27.239 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3357298' 00:15:27.240 killing process with pid 3357298 00:15:27.240 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3357298 00:15:27.240 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3357298 00:15:27.500 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:27.500 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:27.500 00:15:27.500 real 0m50.981s 00:15:27.500 user 3m15.347s 00:15:27.500 sys 0m2.671s 00:15:27.500 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:27.500 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:27.500 ************************************ 00:15:27.500 END TEST nvmf_vfio_user 00:15:27.501 ************************************ 00:15:27.501 07:29:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:27.501 07:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:27.501 07:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:27.501 07:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.501 ************************************ 00:15:27.501 START TEST nvmf_vfio_user_nvme_compliance 00:15:27.501 ************************************ 00:15:27.501 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:27.762 * Looking for test storage... 
00:15:27.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:27.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.762 --rc genhtml_branch_coverage=1 00:15:27.762 --rc genhtml_function_coverage=1 00:15:27.762 --rc genhtml_legend=1 00:15:27.762 --rc geninfo_all_blocks=1 00:15:27.762 --rc geninfo_unexecuted_blocks=1 00:15:27.762 00:15:27.762 ' 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:27.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.762 --rc genhtml_branch_coverage=1 00:15:27.762 --rc genhtml_function_coverage=1 00:15:27.762 --rc genhtml_legend=1 00:15:27.762 --rc geninfo_all_blocks=1 00:15:27.762 --rc geninfo_unexecuted_blocks=1 00:15:27.762 00:15:27.762 ' 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:27.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.762 --rc genhtml_branch_coverage=1 00:15:27.762 --rc genhtml_function_coverage=1 00:15:27.762 --rc genhtml_legend=1 00:15:27.762 --rc geninfo_all_blocks=1 00:15:27.762 --rc geninfo_unexecuted_blocks=1 00:15:27.762 00:15:27.762 ' 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:27.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.762 --rc genhtml_branch_coverage=1 00:15:27.762 --rc genhtml_function_coverage=1 00:15:27.762 --rc genhtml_legend=1 00:15:27.762 --rc geninfo_all_blocks=1 00:15:27.762 --rc 
geninfo_unexecuted_blocks=1 00:15:27.762 00:15:27.762 ' 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3358263 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3358263' 00:15:27.763 Process pid: 3358263 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3358263 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 3358263 ']' 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:27.763 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:27.763 [2024-11-20 07:29:45.940345] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:15:27.763 [2024-11-20 07:29:45.940397] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.023 [2024-11-20 07:29:46.026034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:28.023 [2024-11-20 07:29:46.056876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.023 [2024-11-20 07:29:46.056907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.023 [2024-11-20 07:29:46.056913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.023 [2024-11-20 07:29:46.056918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.023 [2024-11-20 07:29:46.056922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.023 [2024-11-20 07:29:46.058258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.023 [2024-11-20 07:29:46.058390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.023 [2024-11-20 07:29:46.058392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.594 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:28.594 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:15:28.594 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.976 malloc0 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:29.976 07:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.976 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:29.976 00:15:29.976 00:15:29.976 CUnit - A unit testing framework for C - Version 2.1-3 00:15:29.976 http://cunit.sourceforge.net/ 00:15:29.976 00:15:29.977 00:15:29.977 Suite: nvme_compliance 00:15:29.977 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 07:29:47.989266] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.977 [2024-11-20 07:29:47.990569] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:29.977 [2024-11-20 07:29:47.990580] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:29.977 [2024-11-20 07:29:47.990585] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:29.977 [2024-11-20 07:29:47.992289] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.977 passed 00:15:29.977 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 07:29:48.071807] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.977 [2024-11-20 07:29:48.074830] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.977 passed 00:15:29.977 Test: admin_identify_ns ...[2024-11-20 07:29:48.150383] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.237 [2024-11-20 07:29:48.209753] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:30.237 [2024-11-20 07:29:48.217753] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:30.237 [2024-11-20 07:29:48.238829] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:30.237 passed 00:15:30.237 Test: admin_get_features_mandatory_features ...[2024-11-20 07:29:48.314872] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.237 [2024-11-20 07:29:48.317889] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.237 passed 00:15:30.237 Test: admin_get_features_optional_features ...[2024-11-20 07:29:48.396397] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.237 [2024-11-20 07:29:48.399414] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.237 passed 00:15:30.498 Test: admin_set_features_number_of_queues ...[2024-11-20 07:29:48.474163] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.498 [2024-11-20 07:29:48.579840] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.498 passed 00:15:30.498 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 07:29:48.653089] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.498 [2024-11-20 07:29:48.656110] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.498 passed 00:15:30.758 Test: admin_get_log_page_with_lpo ...[2024-11-20 07:29:48.730868] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.758 [2024-11-20 07:29:48.800757] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:30.758 [2024-11-20 07:29:48.813802] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.758 passed 00:15:30.758 Test: fabric_property_get ...[2024-11-20 07:29:48.888047] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.758 [2024-11-20 07:29:48.889253] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:30.758 [2024-11-20 07:29:48.891071] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.758 passed 00:15:31.019 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 07:29:48.965518] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.019 [2024-11-20 07:29:48.966711] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:31.019 [2024-11-20 07:29:48.970554] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.019 passed 00:15:31.019 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 07:29:49.045301] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.019 [2024-11-20 07:29:49.129754] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:31.019 [2024-11-20 07:29:49.145755] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:31.019 [2024-11-20 07:29:49.150830] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.019 passed 00:15:31.019 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 07:29:49.225063] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.279 [2024-11-20 07:29:49.226262] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:31.279 [2024-11-20 07:29:49.228078] vfio_user.c:2794:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.279 passed 00:15:31.279 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 07:29:49.302792] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.279 [2024-11-20 07:29:49.379759] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:31.280 [2024-11-20 07:29:49.403754] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:31.280 [2024-11-20 07:29:49.408818] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.280 passed 00:15:31.280 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 07:29:49.481984] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.280 [2024-11-20 07:29:49.483181] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:31.280 [2024-11-20 07:29:49.483199] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:31.280 [2024-11-20 07:29:49.485003] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.540 passed 00:15:31.540 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 07:29:49.561772] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.540 [2024-11-20 07:29:49.654751] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:31.540 [2024-11-20 07:29:49.662748] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:31.540 [2024-11-20 07:29:49.670754] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:31.540 [2024-11-20 07:29:49.678756] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:31.540 [2024-11-20 07:29:49.707815] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.540 passed 00:15:31.800 Test: admin_create_io_sq_verify_pc ...[2024-11-20 07:29:49.781003] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.800 [2024-11-20 07:29:49.800758] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:31.800 [2024-11-20 07:29:49.818191] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.800 passed 00:15:31.800 Test: admin_create_io_qp_max_qps ...[2024-11-20 07:29:49.890648] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.184 [2024-11-20 07:29:50.980755] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:33.184 [2024-11-20 07:29:51.367195] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.445 passed 00:15:33.445 Test: admin_create_io_sq_shared_cq ...[2024-11-20 07:29:51.441098] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.445 [2024-11-20 07:29:51.576750] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:33.445 [2024-11-20 07:29:51.613803] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.445 passed 00:15:33.445 00:15:33.445 Run Summary: Type Total Ran Passed Failed Inactive 00:15:33.445 suites 1 1 n/a 0 0 00:15:33.445 tests 18 18 18 0 0 00:15:33.445 asserts 
360 360 360 0 n/a 00:15:33.445 00:15:33.445 Elapsed time = 1.487 seconds 00:15:33.445 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3358263 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 3358263 ']' 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 3358263 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3358263 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3358263' 00:15:33.706 killing process with pid 3358263 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 3358263 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 3358263 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:33.706 00:15:33.706 real 0m6.188s 00:15:33.706 user 0m17.541s 00:15:33.706 sys 0m0.545s 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.706 ************************************ 00:15:33.706 END TEST nvmf_vfio_user_nvme_compliance 00:15:33.706 ************************************ 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:33.706 07:29:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.968 ************************************ 00:15:33.968 START TEST nvmf_vfio_user_fuzz 00:15:33.968 ************************************ 00:15:33.968 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:33.968 * Looking for test storage... 
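For reference, the compliance flow that just finished reduces to a short command sequence. This is a sketch assuming the workspace paths shown in this log, with the autotest rpc_cmd wrapper replaced by direct scripts/rpc.py invocations (both target /var/tmp/spdk.sock by default):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &   # reactors on cores 0-2
    nvmfpid=$!
    # ... poll until /var/tmp/spdk.sock accepts RPCs (see the waitforlisten sketch below) ...
    $spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $spdk/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MiB bdev, 512 B blocks
    $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    $spdk/test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
    kill $nvmfpid

All values (malloc size, NQN, listener address) are taken verbatim from the trace above.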
00:15:33.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:33.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.968 --rc genhtml_branch_coverage=1 00:15:33.968 --rc genhtml_function_coverage=1 00:15:33.968 --rc genhtml_legend=1 00:15:33.968 --rc geninfo_all_blocks=1 00:15:33.968 --rc geninfo_unexecuted_blocks=1 00:15:33.968 00:15:33.968 ' 00:15:33.968 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:33.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.968 --rc genhtml_branch_coverage=1 00:15:33.968 --rc genhtml_function_coverage=1 00:15:33.968 --rc genhtml_legend=1 00:15:33.968 --rc geninfo_all_blocks=1 00:15:33.968 --rc geninfo_unexecuted_blocks=1 00:15:33.968 00:15:33.968 ' 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:33.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.969 --rc genhtml_branch_coverage=1 00:15:33.969 --rc genhtml_function_coverage=1 00:15:33.969 --rc genhtml_legend=1 00:15:33.969 --rc geninfo_all_blocks=1 00:15:33.969 --rc geninfo_unexecuted_blocks=1 00:15:33.969 00:15:33.969 ' 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:33.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.969 --rc genhtml_branch_coverage=1 00:15:33.969 --rc genhtml_function_coverage=1 00:15:33.969 --rc genhtml_legend=1 00:15:33.969 --rc geninfo_all_blocks=1 00:15:33.969 --rc geninfo_unexecuted_blocks=1 00:15:33.969 00:15:33.969 ' 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:33.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3359410 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3359410' 00:15:33.969 Process pid: 3359410 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3359410 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 3359410 ']' 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
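The waitforlisten step above blocks until the freshly launched target answers on its UNIX-domain RPC socket. A minimal stand-in, reusing the spdk path variable from the sketch above and assuming (consistent with the max_retries=100 local in the trace) that the real helper in autotest_common.sh also checks the pid stays alive between attempts:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1     # target process died
            # rpc_get_methods answers as soon as the app is serving RPCs
            "$spdk/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods \
                &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                       # timed out
    }

Called as waitforlisten_sketch "$nvmfpid" before the first rpc_cmd of the test.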
00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:33.969 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:34.986 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:34.986 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:15:34.986 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.003 malloc0 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
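The trid assembled above is a space-separated list of key:value transport-ID fields (trtype, subnqn and traddr here; a TCP target would also carry adrfam and trsvcid) and is handed to nvme_fuzz below through -F. Since the fuzzer is seeded with -S 123456, rerunning the identical command should replay the same pseudo-random command stream; judging by the timestamps that follow, -t 30 bounds the run to roughly 30 seconds:

    # Sketch: reproduce this fuzz run with the same seed (paths as in this workspace)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz \
        -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
        -N -a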
00:15:36.003 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:08.126 Fuzzing completed. Shutting down the fuzz application 00:16:08.126 00:16:08.126 Dumping successful admin opcodes: 00:16:08.126 8, 9, 10, 24, 00:16:08.126 Dumping successful io opcodes: 00:16:08.126 0, 00:16:08.127 NS: 0x20000081ef00 I/O qp, Total commands completed: 1425693, total successful commands: 5599, random_seed: 908547968 00:16:08.127 NS: 0x20000081ef00 admin qp, Total commands completed: 346058, total successful commands: 2785, random_seed: 1274209728 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3359410 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 3359410 ']' 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 3359410 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3359410 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3359410' 00:16:08.127 killing process with pid 3359410 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 3359410 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 3359410 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:08.127 00:16:08.127 real 0m32.830s 00:16:08.127 user 0m37.282s 00:16:08.127 sys 0m24.049s 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:08.127 
************************************ 00:16:08.127 END TEST nvmf_vfio_user_fuzz 00:16:08.127 ************************************ 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:08.127 ************************************ 00:16:08.127 START TEST nvmf_auth_target 00:16:08.127 ************************************ 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:08.127 * Looking for test storage... 00:16:08.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:08.127 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:08.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.127 --rc genhtml_branch_coverage=1 00:16:08.127 --rc genhtml_function_coverage=1 00:16:08.127 --rc genhtml_legend=1 00:16:08.127 --rc geninfo_all_blocks=1 00:16:08.127 --rc geninfo_unexecuted_blocks=1 00:16:08.127 00:16:08.127 ' 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:08.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.127 --rc genhtml_branch_coverage=1 00:16:08.127 --rc genhtml_function_coverage=1 00:16:08.127 --rc genhtml_legend=1 00:16:08.127 --rc geninfo_all_blocks=1 00:16:08.127 --rc geninfo_unexecuted_blocks=1 00:16:08.127 00:16:08.127 ' 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:08.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.127 --rc genhtml_branch_coverage=1 00:16:08.127 --rc genhtml_function_coverage=1 00:16:08.127 --rc genhtml_legend=1 00:16:08.127 --rc geninfo_all_blocks=1 00:16:08.127 --rc geninfo_unexecuted_blocks=1 00:16:08.127 00:16:08.127 ' 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:08.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.127 --rc genhtml_branch_coverage=1 00:16:08.127 --rc genhtml_function_coverage=1 00:16:08.127 --rc genhtml_legend=1 00:16:08.127 --rc geninfo_all_blocks=1 00:16:08.127 --rc geninfo_unexecuted_blocks=1 00:16:08.127 00:16:08.127 ' 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.127 07:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.127 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:08.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
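The "line 33: [: : integer expression expected" message above comes from build_nvmf_app_args evaluating `'[' '' -eq 1 ']'`: an unset variable fed into a numeric test, which `[` complains about on stderr while the test simply evaluates false and the script carries on. A hedged sketch of the defensive form that avoids the noise (illustrative only, and PLACEHOLDER_FLAG is a stand-in for whatever variable common.sh line 33 actually tests):

    # '[ "$PLACEHOLDER_FLAG" -eq 1 ]' warns when the variable is empty;
    # defaulting the expansion keeps the test purely numeric and silent.
    if [[ ${PLACEHOLDER_FLAG:-0} -eq 1 ]]; then
        echo "flag enabled"
    fi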
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:08.128 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:14.720 
07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:14.720 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.720 07:30:32 
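gather_supported_nvmf_pci_devs above matches NIC PCI IDs against allowlists (e810 is 0x8086:0x1592/0x159b, x722 is 0x8086:0x37d2, plus a set of Mellanox IDs) and then resolves each matched function to its kernel interface through sysfs, via the `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` expansion visible just below. A minimal standalone sketch of that resolution step (assuming only that the kernel exposes a net/ directory for bound netdev drivers):

    #!/usr/bin/env bash
    # For each PCI address, list the network interfaces the kernel created
    # for it, e.g. 0000:31:00.0 -> cvl_0_0.
    for pci in 0000:31:00.0 0000:31:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdir ]] || continue   # no driver bound, or no netdev
            echo "Found net device under $pci: ${netdir##*/}"
        done
    done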
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:14.720 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:14.720 Found net devices under 0000:31:00.0: cvl_0_0 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.720 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:14.721 Found net devices under 0000:31:00.1: cvl_0_1 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:14.721 07:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:14.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:16:14.721 00:16:14.721 --- 10.0.0.2 ping statistics --- 00:16:14.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.721 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:14.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:16:14.721 00:16:14.721 --- 10.0.0.1 ping statistics --- 00:16:14.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.721 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3370129 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3370129 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3370129 ']' 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
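After the cross-namespace pings succeed, nvmfappstart launches nvmf_tgt inside the namespace (NVMF_APP is prefixed with the `ip netns exec` command assembled at nvmf/common.sh@293) and waitforlisten blocks until the daemon's RPC socket answers. SPDK's real waitforlisten is more thorough; a hedged sketch of the shape of such a readiness wait, assuming scripts/rpc.py as shown in the trace:

    # Poll until the app behind $rpc_sock accepts RPCs, or give up.
    wait_for_rpc() {
        local rpc_sock=$1 pid=$2 retries=100
        while ((retries-- > 0)); do
            kill -0 "$pid" 2> /dev/null || return 1    # process died early
            if scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
                return 0                               # socket is live
            fi
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc /var/tmp/spdk.sock "$nvmfpid"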
00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:14.721 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3370297 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:15.665 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b1e56928d117bc43b63dc83cc14f8f9e6936ef3392e6be52 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Ogh 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b1e56928d117bc43b63dc83cc14f8f9e6936ef3392e6be52 0 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b1e56928d117bc43b63dc83cc14f8f9e6936ef3392e6be52 0 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b1e56928d117bc43b63dc83cc14f8f9e6936ef3392e6be52 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
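gen_dhchap_key above draws N random bytes as hex with xxd, then format_key wraps that hex text into the DHHC-1 secret syntax NVMe DH-HMAC-CHAP uses: "DHHC-1:<hash id>:<base64 of the key bytes plus a little-endian CRC32 of them>:", where hash id 00 is a cleartext key and 01/02/03 mark sha256/sha384/sha512 (matching the digests map in the trace). A sketch of the wrapping step as the trace's inline python would do it, reconstructed from the visible inputs and outputs, so treat the details as assumptions:

    format_dhchap_key() {   # format_dhchap_key <key string> <hash id>
        python3 - "$1" "$2" << 'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                     # the hex text itself is the secret
    crc = zlib.crc32(key).to_bytes(4, "little")    # trailing integrity field
    print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    EOF
    }
    format_dhchap_key "$(xxd -p -c0 -l 24 /dev/urandom)" 0   # hash id 0 = cleartext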
00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Ogh 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Ogh 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Ogh 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=330984354d79954d5a880560ce1d6a1a6e186a9167c78ee37f68ad80d4755557 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.6Ic 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 330984354d79954d5a880560ce1d6a1a6e186a9167c78ee37f68ad80d4755557 3 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 330984354d79954d5a880560ce1d6a1a6e186a9167c78ee37f68ad80d4755557 3 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=330984354d79954d5a880560ce1d6a1a6e186a9167c78ee37f68ad80d4755557 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.6Ic 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.6Ic 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.6Ic 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dc9102b83ce31528eadb1b972299ffc5 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.WEG 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dc9102b83ce31528eadb1b972299ffc5 1 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dc9102b83ce31528eadb1b972299ffc5 1 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dc9102b83ce31528eadb1b972299ffc5 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.WEG 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.WEG 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.WEG 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c17b22451b13465f5f80909ca0580a52d7ded56c9a544510 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Xim 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c17b22451b13465f5f80909ca0580a52d7ded56c9a544510 2 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c17b22451b13465f5f80909ca0580a52d7ded56c9a544510 2 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:15.666 07:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c17b22451b13465f5f80909ca0580a52d7ded56c9a544510 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:15.666 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Xim 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Xim 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Xim 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=27d90c62153acdc570fb5332455f992a9c1e2850ef91545c 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yNu 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 27d90c62153acdc570fb5332455f992a9c1e2850ef91545c 2 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 27d90c62153acdc570fb5332455f992a9c1e2850ef91545c 2 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=27d90c62153acdc570fb5332455f992a9c1e2850ef91545c 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yNu 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yNu 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.yNu 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d1fa455d4106befa5128b9bb672310e3 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:15.929 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.EWu 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d1fa455d4106befa5128b9bb672310e3 1 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d1fa455d4106befa5128b9bb672310e3 1 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d1fa455d4106befa5128b9bb672310e3 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.EWu 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.EWu 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.EWu 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=85134acde4c017ec92c22ed74c9b2f3c0338ae24ca9a0628e46c27fe9e652875 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Xne 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 85134acde4c017ec92c22ed74c9b2f3c0338ae24ca9a0628e46c27fe9e652875 3 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 85134acde4c017ec92c22ed74c9b2f3c0338ae24ca9a0628e46c27fe9e652875 3 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=85134acde4c017ec92c22ed74c9b2f3c0338ae24ca9a0628e46c27fe9e652875 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Xne 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Xne 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Xne 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3370129 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3370129 ']' 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:15.929 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.191 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:16.191 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:16.191 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3370297 /var/tmp/host.sock 00:16:16.191 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3370297 ']' 00:16:16.191 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:16:16.191 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:16.191 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:16.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
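Note the asymmetry at target/auth.sh@97: keys[3] gets a sha512 key but ckeys[3] stays empty, so the fourth slot exercises unidirectional authentication only (the host proves itself to the target but never demands a controller response). The `${ckeys[$3]:+...}` idiom seen later makes that drop out naturally; the same pattern in isolation:

    ckeys[3]=""
    # :+ expands to the alternate words only when the variable is set and
    # non-empty, so an empty ckey contributes zero arguments, not "".
    ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
    echo "extra args: ${#ckey[@]}"   # 0 for slot 3, 2 for slots with a ckey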
00:16:16.191 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:16.191 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.453 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:16.453 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:16.453 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:16.453 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.453 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.453 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.453 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:16.453 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ogh 00:16:16.453 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.453 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.453 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.453 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Ogh 00:16:16.453 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Ogh 00:16:16.714 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.6Ic ]] 00:16:16.714 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Ic 00:16:16.714 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.714 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.714 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.714 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Ic 00:16:16.714 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Ic 00:16:16.975 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:16.975 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.WEG 00:16:16.975 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.975 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.975 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.975 07:30:34 
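Each generated key file is registered twice, once with the target over /var/tmp/spdk.sock (rpc_cmd) and once with the host application over /var/tmp/host.sock (hostrpc), since DH-HMAC-CHAP needs both ends to resolve the same named key. Reduced to the bare RPC calls from the trace (scripts/rpc.py stands for the full workspace path shown above):

    # Target side: register key0/ckey0 with nvmf_tgt.
    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.Ogh
    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Ic

    # Host side: register the same files under the same names with spdk_tgt.
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.Ogh
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Ic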
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.WEG 00:16:16.975 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.WEG 00:16:16.975 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Xim ]] 00:16:16.975 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xim 00:16:16.975 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.975 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.975 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.975 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xim 00:16:16.975 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xim 00:16:17.236 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:17.236 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yNu 00:16:17.236 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.236 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.236 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.236 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.yNu 00:16:17.236 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.yNu 00:16:17.496 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.EWu ]] 00:16:17.496 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EWu 00:16:17.496 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.496 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.496 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.496 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EWu 00:16:17.496 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EWu 00:16:17.756 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:17.756 07:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Xne 00:16:17.756 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.756 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.756 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.756 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Xne 00:16:17.756 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Xne 00:16:18.017 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:18.018 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:18.018 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.018 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.018 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.018 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.018 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:18.018 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.018 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.018 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.018 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:18.018 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.018 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.018 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.018 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.018 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.018 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.018 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.018 
07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.279 00:16:18.279 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.279 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.279 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.541 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.541 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.541 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.541 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.541 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.541 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.541 { 00:16:18.541 "cntlid": 1, 00:16:18.541 "qid": 0, 00:16:18.541 "state": "enabled", 00:16:18.541 "thread": "nvmf_tgt_poll_group_000", 00:16:18.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:18.541 "listen_address": { 00:16:18.541 "trtype": "TCP", 00:16:18.541 "adrfam": "IPv4", 00:16:18.541 "traddr": "10.0.0.2", 00:16:18.541 "trsvcid": "4420" 00:16:18.541 }, 00:16:18.541 "peer_address": { 00:16:18.541 "trtype": "TCP", 00:16:18.541 "adrfam": "IPv4", 00:16:18.541 "traddr": "10.0.0.1", 00:16:18.541 "trsvcid": "35500" 00:16:18.541 }, 00:16:18.541 "auth": { 00:16:18.541 "state": "completed", 00:16:18.541 "digest": "sha256", 00:16:18.541 "dhgroup": "null" 00:16:18.541 } 00:16:18.541 } 00:16:18.541 ]' 00:16:18.541 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.541 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.541 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.541 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:18.541 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.802 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.802 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.802 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.802 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:18.802 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.744 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.745 07:30:37 
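One connect_authenticate pass, per the trace just completed for key0: pin the host to a single digest/dhgroup combination, allow the host NQN on the subsystem with the key pair, attach a controller over 10.0.0.2:4420, confirm via nvmf_subsystem_get_qpairs that auth.state is "completed" with the expected digest and dhgroup, then tear down and repeat the proof with the kernel initiator, which takes the formatted DHHC-1 strings directly. Compressed sequence; rpc.py stands for the full scripts/rpc.py path, $hostnqn/$hostid for the UUID-based values above, and the long secrets are elided as `...`:

    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null
    rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'                    # expect "completed"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Same proof with the kernel initiator, feeding the DHHC-1 strings directly:
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"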
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.745 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.005 00:16:20.005 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.005 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.005 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.266 { 00:16:20.266 "cntlid": 3, 00:16:20.266 "qid": 0, 00:16:20.266 "state": "enabled", 00:16:20.266 "thread": "nvmf_tgt_poll_group_000", 00:16:20.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:20.266 "listen_address": { 00:16:20.266 "trtype": "TCP", 00:16:20.266 "adrfam": "IPv4", 00:16:20.266 "traddr": "10.0.0.2", 00:16:20.266 "trsvcid": "4420" 00:16:20.266 }, 00:16:20.266 "peer_address": { 00:16:20.266 "trtype": "TCP", 00:16:20.266 "adrfam": "IPv4", 00:16:20.266 "traddr": "10.0.0.1", 00:16:20.266 "trsvcid": "35520" 00:16:20.266 }, 00:16:20.266 "auth": { 00:16:20.266 "state": "completed", 00:16:20.266 "digest": "sha256", 00:16:20.266 "dhgroup": "null" 00:16:20.266 } 00:16:20.266 } 00:16:20.266 ]' 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.266 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.526 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:16:20.526 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:16:21.096 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.096 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:21.096 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.096 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.096 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.096 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.096 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.096 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.356 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:21.356 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.356 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.356 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.356 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:21.356 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.356 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.356 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.356 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.356 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.356 07:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.356 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.356 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.617 00:16:21.617 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.617 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.617 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.877 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.877 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.877 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.877 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.877 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.878 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.878 { 00:16:21.878 "cntlid": 5, 00:16:21.878 "qid": 0, 00:16:21.878 "state": "enabled", 00:16:21.878 "thread": "nvmf_tgt_poll_group_000", 00:16:21.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:21.878 "listen_address": { 00:16:21.878 "trtype": "TCP", 00:16:21.878 "adrfam": "IPv4", 00:16:21.878 "traddr": "10.0.0.2", 00:16:21.878 "trsvcid": "4420" 00:16:21.878 }, 00:16:21.878 "peer_address": { 00:16:21.878 "trtype": "TCP", 00:16:21.878 "adrfam": "IPv4", 00:16:21.878 "traddr": "10.0.0.1", 00:16:21.878 "trsvcid": "35534" 00:16:21.878 }, 00:16:21.878 "auth": { 00:16:21.878 "state": "completed", 00:16:21.878 "digest": "sha256", 00:16:21.878 "dhgroup": "null" 00:16:21.878 } 00:16:21.878 } 00:16:21.878 ]' 00:16:21.878 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.878 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.878 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.878 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.878 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.878 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.878 07:30:40 
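
Each attach is then verified from the target's perspective: nvmf_subsystem_get_qpairs returns an auth object per queue pair, and the test asserts the negotiated digest, DH group, and final state, exactly as in the jq checks above. A minimal sketch of that verification (rpc_cmd stands for rpc.py against the target's RPC socket, as in the test framework):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The auth block is only present on queue pairs that authenticated.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]  # "null" = no DH exchange
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
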
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.878 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.138 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:16:22.138 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:16:22.710 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.710 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:22.710 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.710 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.710 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.710 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.710 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:22.710 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:22.971 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:22.971 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.971 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.971 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:22.971 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.971 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.971 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:22.971 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.971 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.971 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.971 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:22.971 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.971 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.232 00:16:23.232 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.232 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.232 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.493 { 00:16:23.493 "cntlid": 7, 00:16:23.493 "qid": 0, 00:16:23.493 "state": "enabled", 00:16:23.493 "thread": "nvmf_tgt_poll_group_000", 00:16:23.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:23.493 "listen_address": { 00:16:23.493 "trtype": "TCP", 00:16:23.493 "adrfam": "IPv4", 00:16:23.493 "traddr": "10.0.0.2", 00:16:23.493 "trsvcid": "4420" 00:16:23.493 }, 00:16:23.493 "peer_address": { 00:16:23.493 "trtype": "TCP", 00:16:23.493 "adrfam": "IPv4", 00:16:23.493 "traddr": "10.0.0.1", 00:16:23.493 "trsvcid": "35570" 00:16:23.493 }, 00:16:23.493 "auth": { 00:16:23.493 "state": "completed", 00:16:23.493 "digest": "sha256", 00:16:23.493 "dhgroup": "null" 00:16:23.493 } 00:16:23.493 } 00:16:23.493 ]' 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.493 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.753 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:16:23.753 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:16:24.325 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.325 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:24.325 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.325 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.325 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.325 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.325 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.325 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:24.325 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:24.586 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:24.586 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.586 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.586 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:24.586 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:24.586 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.586 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.586 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
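
At this point the outer loop advances from the null DH group to ffdhe2048. Before each round the host constrains what it is willing to negotiate via bdev_nvme_set_options; with no overlap between the host's and the target's allow-lists, authentication would fail rather than fall back. Roughly:

    # Offer only SHA-256 as the HMAC digest and ffdhe2048 as the DH group.
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # With dhgroup "null" (the earlier rounds) no ephemeral DH exchange is
    # performed, so the session rests on the shared secret alone; the
    # ffdhe2048/ffdhe3072 rounds add a DH exchange on every connect.
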
common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.586 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.586 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.586 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.586 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.586 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.846 00:16:24.846 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.846 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.846 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.846 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.846 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.846 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.846 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.107 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.107 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.107 { 00:16:25.107 "cntlid": 9, 00:16:25.107 "qid": 0, 00:16:25.107 "state": "enabled", 00:16:25.107 "thread": "nvmf_tgt_poll_group_000", 00:16:25.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:25.107 "listen_address": { 00:16:25.107 "trtype": "TCP", 00:16:25.107 "adrfam": "IPv4", 00:16:25.107 "traddr": "10.0.0.2", 00:16:25.107 "trsvcid": "4420" 00:16:25.107 }, 00:16:25.107 "peer_address": { 00:16:25.107 "trtype": "TCP", 00:16:25.107 "adrfam": "IPv4", 00:16:25.107 "traddr": "10.0.0.1", 00:16:25.107 "trsvcid": "40658" 00:16:25.107 }, 00:16:25.107 "auth": { 00:16:25.107 "state": "completed", 00:16:25.107 "digest": "sha256", 00:16:25.107 "dhgroup": "ffdhe2048" 00:16:25.107 } 00:16:25.107 } 00:16:25.107 ]' 00:16:25.107 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.107 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.107 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.107 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:25.107 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.107 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.107 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.107 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.368 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:25.368 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:25.939 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.939 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:25.939 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.939 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.939 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.939 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.939 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:25.939 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.199 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:26.199 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.200 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.200 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:26.200 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:26.200 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.200 07:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.200 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.200 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.200 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.200 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.200 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.200 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.460 00:16:26.460 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.460 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.460 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.460 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.460 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.460 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.460 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.460 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.460 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.460 { 00:16:26.460 "cntlid": 11, 00:16:26.460 "qid": 0, 00:16:26.460 "state": "enabled", 00:16:26.460 "thread": "nvmf_tgt_poll_group_000", 00:16:26.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:26.460 "listen_address": { 00:16:26.460 "trtype": "TCP", 00:16:26.460 "adrfam": "IPv4", 00:16:26.460 "traddr": "10.0.0.2", 00:16:26.460 "trsvcid": "4420" 00:16:26.460 }, 00:16:26.460 "peer_address": { 00:16:26.460 "trtype": "TCP", 00:16:26.460 "adrfam": "IPv4", 00:16:26.460 "traddr": "10.0.0.1", 00:16:26.460 "trsvcid": "40674" 00:16:26.460 }, 00:16:26.460 "auth": { 00:16:26.460 "state": "completed", 00:16:26.460 "digest": "sha256", 00:16:26.460 "dhgroup": "ffdhe2048" 00:16:26.460 } 00:16:26.460 } 00:16:26.460 ]' 00:16:26.460 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.720 07:30:44 
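
The mirror image on the target side: each round (re)registers the host NQN with the key pair it must present. --dhchap-key is what the host authenticates with; the optional --dhchap-ctrlr-key enables bidirectional authentication, letting the host challenge the controller in turn. Sketched with the same arguments as in the log (the target's RPC socket path is not shown in this excerpt):

    # Target side: require DH-CHAP from this host, using key1/ckey1.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # ... and deregister it afterwards so the next round can bind the same
    # host NQN to a different key.
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
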
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.720 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.720 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:26.720 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.720 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.720 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.720 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.981 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:16:26.981 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:16:27.554 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.554 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:27.554 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.554 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.554 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.554 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.554 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.554 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.814 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:27.814 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.814 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.814 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:27.814 07:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.814 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.814 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.814 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.814 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.814 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.815 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.815 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.815 07:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.075 00:16:28.075 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.075 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.075 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.075 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.075 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.075 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.075 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.335 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.335 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.335 { 00:16:28.335 "cntlid": 13, 00:16:28.335 "qid": 0, 00:16:28.335 "state": "enabled", 00:16:28.335 "thread": "nvmf_tgt_poll_group_000", 00:16:28.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:28.335 "listen_address": { 00:16:28.335 "trtype": "TCP", 00:16:28.335 "adrfam": "IPv4", 00:16:28.335 "traddr": "10.0.0.2", 00:16:28.335 "trsvcid": "4420" 00:16:28.335 }, 00:16:28.335 "peer_address": { 00:16:28.335 "trtype": "TCP", 00:16:28.335 "adrfam": "IPv4", 00:16:28.335 "traddr": "10.0.0.1", 00:16:28.335 "trsvcid": "40702" 00:16:28.335 }, 00:16:28.335 "auth": { 00:16:28.335 "state": "completed", 00:16:28.335 "digest": 
"sha256", 00:16:28.335 "dhgroup": "ffdhe2048" 00:16:28.335 } 00:16:28.335 } 00:16:28.335 ]' 00:16:28.335 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.335 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.335 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.335 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.335 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.335 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.335 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.335 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.595 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:16:28.595 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:16:29.165 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.165 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:29.165 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.165 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.165 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.165 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.165 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.165 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.425 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:29.425 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.425 07:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.425 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.425 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.425 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.425 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:29.425 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.425 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.425 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.425 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:29.425 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.425 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.685 00:16:29.685 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.685 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.685 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.945 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.945 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.945 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.945 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.945 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.945 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.945 { 00:16:29.945 "cntlid": 15, 00:16:29.945 "qid": 0, 00:16:29.945 "state": "enabled", 00:16:29.945 "thread": "nvmf_tgt_poll_group_000", 00:16:29.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:29.945 "listen_address": { 00:16:29.945 "trtype": "TCP", 00:16:29.945 "adrfam": "IPv4", 00:16:29.945 "traddr": "10.0.0.2", 00:16:29.945 "trsvcid": "4420" 00:16:29.945 }, 00:16:29.945 "peer_address": { 00:16:29.945 "trtype": "TCP", 00:16:29.945 "adrfam": "IPv4", 00:16:29.945 "traddr": "10.0.0.1", 00:16:29.945 
"trsvcid": "40736" 00:16:29.945 }, 00:16:29.945 "auth": { 00:16:29.945 "state": "completed", 00:16:29.945 "digest": "sha256", 00:16:29.945 "dhgroup": "ffdhe2048" 00:16:29.945 } 00:16:29.945 } 00:16:29.945 ]' 00:16:29.945 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.945 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.945 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.945 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.945 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.945 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.945 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.945 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.206 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:16:30.206 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:16:30.778 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.778 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:30.778 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.778 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.778 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.778 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.778 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.778 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:30.778 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.038 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:31.038 07:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.038 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.038 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:31.038 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.038 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.038 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.038 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.038 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.038 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.038 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.038 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.038 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.299 00:16:31.299 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.299 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.299 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.299 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.299 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.299 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.299 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.559 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.559 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.559 { 00:16:31.559 "cntlid": 17, 00:16:31.559 "qid": 0, 00:16:31.559 "state": "enabled", 00:16:31.559 "thread": "nvmf_tgt_poll_group_000", 00:16:31.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:31.559 "listen_address": { 00:16:31.559 "trtype": "TCP", 00:16:31.559 "adrfam": "IPv4", 
00:16:31.559 "traddr": "10.0.0.2", 00:16:31.559 "trsvcid": "4420" 00:16:31.559 }, 00:16:31.559 "peer_address": { 00:16:31.559 "trtype": "TCP", 00:16:31.559 "adrfam": "IPv4", 00:16:31.559 "traddr": "10.0.0.1", 00:16:31.559 "trsvcid": "40746" 00:16:31.559 }, 00:16:31.559 "auth": { 00:16:31.559 "state": "completed", 00:16:31.559 "digest": "sha256", 00:16:31.559 "dhgroup": "ffdhe3072" 00:16:31.559 } 00:16:31.559 } 00:16:31.559 ]' 00:16:31.559 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.559 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.559 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.559 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:31.559 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.559 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.559 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.559 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.820 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:31.820 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:32.392 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.392 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:32.392 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.392 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.392 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.392 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.392 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:32.392 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:32.652 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:32.652 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.652 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.652 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:32.652 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:32.652 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.652 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.652 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.652 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.652 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.652 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.652 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.652 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.913 00:16:32.913 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.913 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.913 07:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.913 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.913 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.913 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.913 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.913 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.913 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.913 { 
00:16:32.913 "cntlid": 19, 00:16:32.913 "qid": 0, 00:16:32.913 "state": "enabled", 00:16:32.913 "thread": "nvmf_tgt_poll_group_000", 00:16:32.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:32.913 "listen_address": { 00:16:32.913 "trtype": "TCP", 00:16:32.913 "adrfam": "IPv4", 00:16:32.913 "traddr": "10.0.0.2", 00:16:32.913 "trsvcid": "4420" 00:16:32.913 }, 00:16:32.913 "peer_address": { 00:16:32.913 "trtype": "TCP", 00:16:32.913 "adrfam": "IPv4", 00:16:32.913 "traddr": "10.0.0.1", 00:16:32.913 "trsvcid": "40768" 00:16:32.913 }, 00:16:32.913 "auth": { 00:16:32.913 "state": "completed", 00:16:32.913 "digest": "sha256", 00:16:32.913 "dhgroup": "ffdhe3072" 00:16:32.913 } 00:16:32.913 } 00:16:32.913 ]' 00:16:32.913 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.174 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.174 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.174 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.174 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.174 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.174 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.174 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.434 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:16:33.434 07:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:16:34.004 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.004 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:34.004 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.004 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.004 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.004 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.004 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.004 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.264 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:34.264 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.265 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.265 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:34.265 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:34.265 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.265 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.265 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.265 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.265 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.265 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.265 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.265 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.524 00:16:34.524 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.524 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.524 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.855 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.855 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.855 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.855 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.855 07:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.855 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.855 { 00:16:34.855 "cntlid": 21, 00:16:34.855 "qid": 0, 00:16:34.855 "state": "enabled", 00:16:34.855 "thread": "nvmf_tgt_poll_group_000", 00:16:34.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:34.855 "listen_address": { 00:16:34.855 "trtype": "TCP", 00:16:34.855 "adrfam": "IPv4", 00:16:34.855 "traddr": "10.0.0.2", 00:16:34.855 "trsvcid": "4420" 00:16:34.855 }, 00:16:34.855 "peer_address": { 00:16:34.855 "trtype": "TCP", 00:16:34.855 "adrfam": "IPv4", 00:16:34.855 "traddr": "10.0.0.1", 00:16:34.855 "trsvcid": "40804" 00:16:34.855 }, 00:16:34.855 "auth": { 00:16:34.855 "state": "completed", 00:16:34.855 "digest": "sha256", 00:16:34.855 "dhgroup": "ffdhe3072" 00:16:34.855 } 00:16:34.855 } 00:16:34.855 ]' 00:16:34.855 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.855 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.855 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.855 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.855 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.855 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.855 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.855 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.114 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:16:35.114 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:16:35.683 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.683 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:35.683 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.683 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.683 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:35.683 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.683 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.683 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.944 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:35.944 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.944 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.944 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.944 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:35.944 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.944 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:35.944 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.944 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.944 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.944 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:35.944 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.944 07:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.204 00:16:36.204 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.204 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.204 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.204 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.465 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.465 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.465 07:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.465 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.465 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.465 { 00:16:36.465 "cntlid": 23, 00:16:36.465 "qid": 0, 00:16:36.465 "state": "enabled", 00:16:36.465 "thread": "nvmf_tgt_poll_group_000", 00:16:36.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:36.465 "listen_address": { 00:16:36.465 "trtype": "TCP", 00:16:36.465 "adrfam": "IPv4", 00:16:36.465 "traddr": "10.0.0.2", 00:16:36.465 "trsvcid": "4420" 00:16:36.465 }, 00:16:36.465 "peer_address": { 00:16:36.466 "trtype": "TCP", 00:16:36.466 "adrfam": "IPv4", 00:16:36.466 "traddr": "10.0.0.1", 00:16:36.466 "trsvcid": "52296" 00:16:36.466 }, 00:16:36.466 "auth": { 00:16:36.466 "state": "completed", 00:16:36.466 "digest": "sha256", 00:16:36.466 "dhgroup": "ffdhe3072" 00:16:36.466 } 00:16:36.466 } 00:16:36.466 ]' 00:16:36.466 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.466 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.466 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.466 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.466 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.466 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.466 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.466 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.729 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:16:36.729 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:16:37.302 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.302 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:37.302 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.302 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.302 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:37.302 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.302 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.302 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.302 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.564 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:37.564 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.564 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.564 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:37.564 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:37.564 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.564 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.564 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.564 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.564 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.564 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.564 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.564 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.825 00:16:37.825 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.825 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.825 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.086 { 00:16:38.086 "cntlid": 25, 00:16:38.086 "qid": 0, 00:16:38.086 "state": "enabled", 00:16:38.086 "thread": "nvmf_tgt_poll_group_000", 00:16:38.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:38.086 "listen_address": { 00:16:38.086 "trtype": "TCP", 00:16:38.086 "adrfam": "IPv4", 00:16:38.086 "traddr": "10.0.0.2", 00:16:38.086 "trsvcid": "4420" 00:16:38.086 }, 00:16:38.086 "peer_address": { 00:16:38.086 "trtype": "TCP", 00:16:38.086 "adrfam": "IPv4", 00:16:38.086 "traddr": "10.0.0.1", 00:16:38.086 "trsvcid": "52316" 00:16:38.086 }, 00:16:38.086 "auth": { 00:16:38.086 "state": "completed", 00:16:38.086 "digest": "sha256", 00:16:38.086 "dhgroup": "ffdhe4096" 00:16:38.086 } 00:16:38.086 } 00:16:38.086 ]' 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.086 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.347 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:38.347 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:38.919 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.919 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:38.919 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.919 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.919 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.919 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.919 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.919 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:39.179 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:39.179 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.179 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.179 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.179 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:39.179 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.179 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.179 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.180 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.180 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.180 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.180 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.180 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.440 00:16:39.440 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.440 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.440 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.440 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.440 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.440 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.440 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.440 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.440 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.440 { 00:16:39.440 "cntlid": 27, 00:16:39.440 "qid": 0, 00:16:39.440 "state": "enabled", 00:16:39.440 "thread": "nvmf_tgt_poll_group_000", 00:16:39.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:39.440 "listen_address": { 00:16:39.440 "trtype": "TCP", 00:16:39.440 "adrfam": "IPv4", 00:16:39.440 "traddr": "10.0.0.2", 00:16:39.440 "trsvcid": "4420" 00:16:39.440 }, 00:16:39.440 "peer_address": { 00:16:39.440 "trtype": "TCP", 00:16:39.440 "adrfam": "IPv4", 00:16:39.440 "traddr": "10.0.0.1", 00:16:39.440 "trsvcid": "52354" 00:16:39.440 }, 00:16:39.440 "auth": { 00:16:39.440 "state": "completed", 00:16:39.440 "digest": "sha256", 00:16:39.440 "dhgroup": "ffdhe4096" 00:16:39.440 } 00:16:39.440 } 00:16:39.440 ]' 00:16:39.440 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.700 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.700 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.700 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.700 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.700 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.700 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.700 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.961 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:16:39.961 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:16:40.533 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:40.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.533 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:40.533 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.533 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.533 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.533 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.533 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.533 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.794 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:40.794 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.794 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.794 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.794 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:40.795 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.795 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.795 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.795 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.795 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.795 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.795 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.795 07:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.055 00:16:41.055 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
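
After each attach the script verifies, on both sides, that authentication actually happened: the host must see the nvme0 controller, and the target's qpair listing must report the negotiated digest, DH group, and a completed auth state. A minimal sketch of that check, using the same jq filters as the trace (rpc as in the earlier sketch; ffdhe4096 is the group under test at this point in the loop):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Host side: the attach must have produced a controller named nvme0.
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: the qpair must have completed DH-HMAC-CHAP with the
    # expected digest and DH group.
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The bdev_nvme_detach_controller / nvme disconnect / nvmf_subsystem_remove_host records that follow are the per-iteration teardown; the intervening nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:... record repeats the handshake through the kernel initiator, passing the secret blobs directly rather than keyring names.
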
00:16:41.055 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.055 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.055 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.055 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.055 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.055 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.315 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.315 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.315 { 00:16:41.315 "cntlid": 29, 00:16:41.315 "qid": 0, 00:16:41.315 "state": "enabled", 00:16:41.315 "thread": "nvmf_tgt_poll_group_000", 00:16:41.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:41.315 "listen_address": { 00:16:41.315 "trtype": "TCP", 00:16:41.315 "adrfam": "IPv4", 00:16:41.315 "traddr": "10.0.0.2", 00:16:41.315 "trsvcid": "4420" 00:16:41.315 }, 00:16:41.315 "peer_address": { 00:16:41.315 "trtype": "TCP", 00:16:41.315 "adrfam": "IPv4", 00:16:41.315 "traddr": "10.0.0.1", 00:16:41.315 "trsvcid": "52384" 00:16:41.315 }, 00:16:41.315 "auth": { 00:16:41.315 "state": "completed", 00:16:41.315 "digest": "sha256", 00:16:41.315 "dhgroup": "ffdhe4096" 00:16:41.315 } 00:16:41.315 } 00:16:41.315 ]' 00:16:41.315 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.315 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.315 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.315 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.315 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.315 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.315 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.315 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.576 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:16:41.576 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: 
--dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:16:42.147 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.147 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:42.147 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.147 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.147 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.147 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.147 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.147 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.414 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:42.414 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.414 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.414 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.414 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:42.414 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.414 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:42.414 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.414 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.414 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.414 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:42.414 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.414 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.738 00:16:42.738 07:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.738 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.738 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.738 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.738 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.738 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.738 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.738 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.738 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.738 { 00:16:42.738 "cntlid": 31, 00:16:42.738 "qid": 0, 00:16:42.738 "state": "enabled", 00:16:42.738 "thread": "nvmf_tgt_poll_group_000", 00:16:42.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:42.738 "listen_address": { 00:16:42.738 "trtype": "TCP", 00:16:42.738 "adrfam": "IPv4", 00:16:42.738 "traddr": "10.0.0.2", 00:16:42.738 "trsvcid": "4420" 00:16:42.738 }, 00:16:42.738 "peer_address": { 00:16:42.738 "trtype": "TCP", 00:16:42.738 "adrfam": "IPv4", 00:16:42.738 "traddr": "10.0.0.1", 00:16:42.738 "trsvcid": "52394" 00:16:42.738 }, 00:16:42.738 "auth": { 00:16:42.738 "state": "completed", 00:16:42.738 "digest": "sha256", 00:16:42.738 "dhgroup": "ffdhe4096" 00:16:42.738 } 00:16:42.738 } 00:16:42.738 ]' 00:16:42.738 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.738 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.738 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.031 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.031 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.031 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.031 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.031 07:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.031 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:16:43.031 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret 
DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:16:43.601 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.863 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:43.863 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.863 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.863 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.863 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.863 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.863 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.863 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.863 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:43.863 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.863 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.863 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:43.863 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.863 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.863 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.863 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.863 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.863 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.863 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.863 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.863 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.434 00:16:44.434 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.434 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.434 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.434 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.434 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.434 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.434 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.434 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.434 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.434 { 00:16:44.434 "cntlid": 33, 00:16:44.434 "qid": 0, 00:16:44.434 "state": "enabled", 00:16:44.434 "thread": "nvmf_tgt_poll_group_000", 00:16:44.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:44.434 "listen_address": { 00:16:44.434 "trtype": "TCP", 00:16:44.434 "adrfam": "IPv4", 00:16:44.434 "traddr": "10.0.0.2", 00:16:44.434 "trsvcid": "4420" 00:16:44.434 }, 00:16:44.434 "peer_address": { 00:16:44.434 "trtype": "TCP", 00:16:44.434 "adrfam": "IPv4", 00:16:44.434 "traddr": "10.0.0.1", 00:16:44.434 "trsvcid": "52418" 00:16:44.434 }, 00:16:44.434 "auth": { 00:16:44.434 "state": "completed", 00:16:44.434 "digest": "sha256", 00:16:44.434 "dhgroup": "ffdhe6144" 00:16:44.434 } 00:16:44.434 } 00:16:44.434 ]' 00:16:44.434 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.434 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.434 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.695 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.695 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.695 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.695 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.695 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.695 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret 
DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:44.695 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.635 07:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.895 00:16:46.155 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.155 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.155 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.155 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.155 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.155 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.155 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.155 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.155 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.155 { 00:16:46.155 "cntlid": 35, 00:16:46.155 "qid": 0, 00:16:46.155 "state": "enabled", 00:16:46.155 "thread": "nvmf_tgt_poll_group_000", 00:16:46.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:46.155 "listen_address": { 00:16:46.155 "trtype": "TCP", 00:16:46.155 "adrfam": "IPv4", 00:16:46.155 "traddr": "10.0.0.2", 00:16:46.155 "trsvcid": "4420" 00:16:46.155 }, 00:16:46.155 "peer_address": { 00:16:46.155 "trtype": "TCP", 00:16:46.155 "adrfam": "IPv4", 00:16:46.155 "traddr": "10.0.0.1", 00:16:46.155 "trsvcid": "60644" 00:16:46.155 }, 00:16:46.155 "auth": { 00:16:46.155 "state": "completed", 00:16:46.155 "digest": "sha256", 00:16:46.155 "dhgroup": "ffdhe6144" 00:16:46.155 } 00:16:46.155 } 00:16:46.155 ]' 00:16:46.155 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.155 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.155 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.416 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.416 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.416 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.416 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.416 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.675 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:16:46.675 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:16:47.245 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.245 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:47.245 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.245 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.245 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.245 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.245 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.245 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.505 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:47.505 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.505 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.505 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.505 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.505 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.505 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.505 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.505 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.505 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.505 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.505 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.505 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.766 00:16:47.766 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.766 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.766 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.027 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.027 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.027 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.027 07:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.027 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.027 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.027 { 00:16:48.027 "cntlid": 37, 00:16:48.027 "qid": 0, 00:16:48.027 "state": "enabled", 00:16:48.027 "thread": "nvmf_tgt_poll_group_000", 00:16:48.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:48.027 "listen_address": { 00:16:48.027 "trtype": "TCP", 00:16:48.027 "adrfam": "IPv4", 00:16:48.027 "traddr": "10.0.0.2", 00:16:48.027 "trsvcid": "4420" 00:16:48.027 }, 00:16:48.027 "peer_address": { 00:16:48.027 "trtype": "TCP", 00:16:48.027 "adrfam": "IPv4", 00:16:48.027 "traddr": "10.0.0.1", 00:16:48.027 "trsvcid": "60672" 00:16:48.027 }, 00:16:48.027 "auth": { 00:16:48.027 "state": "completed", 00:16:48.027 "digest": "sha256", 00:16:48.027 "dhgroup": "ffdhe6144" 00:16:48.027 } 00:16:48.027 } 00:16:48.027 ]' 00:16:48.027 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.027 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.027 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.027 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.027 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.027 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.027 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:48.027 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.287 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:16:48.287 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:16:48.857 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.857 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:48.857 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.857 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.857 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.857 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.857 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.857 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.117 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:49.117 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.117 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.117 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:49.117 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:49.117 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.117 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:49.117 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.117 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.117 07:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.117 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:49.117 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.117 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.378 00:16:49.378 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.378 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.378 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.639 { 00:16:49.639 "cntlid": 39, 00:16:49.639 "qid": 0, 00:16:49.639 "state": "enabled", 00:16:49.639 "thread": "nvmf_tgt_poll_group_000", 00:16:49.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:49.639 "listen_address": { 00:16:49.639 "trtype": "TCP", 00:16:49.639 "adrfam": "IPv4", 00:16:49.639 "traddr": "10.0.0.2", 00:16:49.639 "trsvcid": "4420" 00:16:49.639 }, 00:16:49.639 "peer_address": { 00:16:49.639 "trtype": "TCP", 00:16:49.639 "adrfam": "IPv4", 00:16:49.639 "traddr": "10.0.0.1", 00:16:49.639 "trsvcid": "60702" 00:16:49.639 }, 00:16:49.639 "auth": { 00:16:49.639 "state": "completed", 00:16:49.639 "digest": "sha256", 00:16:49.639 "dhgroup": "ffdhe6144" 00:16:49.639 } 00:16:49.639 } 00:16:49.639 ]' 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.639 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.900 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:16:49.900 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:16:50.471 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.471 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:50.471 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.471 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.471 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.471 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.471 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.471 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.471 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.732 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:50.732 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.732 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.732 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.732 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.732 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.732 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.732 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
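[Annotation] The round in progress here (connect_authenticate sha256 ffdhe8192 0) follows the same shape as every digest/dhgroup/key combination in this trace: pin the host to one digest/DH-group pair, register the host NQN on the subsystem with a DH-HMAC-CHAP key pair, attach a controller through the host-side RPC socket (which forces the authentication handshake), verify from the target's qpair listing that negotiation completed, then tear down. A minimal sketch of that round-trip using only commands visible in the trace, assuming key0/ckey0 were registered as keyring names earlier in the test and that the target app answers on the default RPC socket:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: pin negotiation to a single digest/DH-group pair.
  $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # Target side: allow this host NQN with key0/ckey0 as its DH-HMAC-CHAP pair.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attaching the controller triggers the authentication handshake.
  $rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Target side: the qpair must report the negotiated digest/dhgroup and a
  # completed auth state -- this is what the jq checks in the trace assert.
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect: completed

  # Tear down before the next digest/dhgroup/key combination.
  $rpc -s $hostsock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each pass also re-checks the same secrets through the kernel initiator (the nvme connect / nvme disconnect pairs in the trace, passing the raw DHHC-1 secrets via --dhchap-secret and --dhchap-ctrl-secret) before the host entry is removed, so both the SPDK host stack and nvme-cli authenticate against every configuration.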
00:16:50.732 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.732 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.732 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.732 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.732 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.304 00:16:51.304 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.304 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.304 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.304 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.304 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.304 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.304 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.564 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.565 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.565 { 00:16:51.565 "cntlid": 41, 00:16:51.565 "qid": 0, 00:16:51.565 "state": "enabled", 00:16:51.565 "thread": "nvmf_tgt_poll_group_000", 00:16:51.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:51.565 "listen_address": { 00:16:51.565 "trtype": "TCP", 00:16:51.565 "adrfam": "IPv4", 00:16:51.565 "traddr": "10.0.0.2", 00:16:51.565 "trsvcid": "4420" 00:16:51.565 }, 00:16:51.565 "peer_address": { 00:16:51.565 "trtype": "TCP", 00:16:51.565 "adrfam": "IPv4", 00:16:51.565 "traddr": "10.0.0.1", 00:16:51.565 "trsvcid": "60734" 00:16:51.565 }, 00:16:51.565 "auth": { 00:16:51.565 "state": "completed", 00:16:51.565 "digest": "sha256", 00:16:51.565 "dhgroup": "ffdhe8192" 00:16:51.565 } 00:16:51.565 } 00:16:51.565 ]' 00:16:51.565 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.565 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.565 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.565 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.565 07:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.565 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.565 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.565 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.826 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:51.826 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:52.398 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.398 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:52.398 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.398 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.398 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.399 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.399 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.399 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.659 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:52.659 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.659 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.659 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.659 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:52.659 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.659 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.659 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.659 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.659 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.659 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.659 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.659 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.231 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.231 { 00:16:53.231 "cntlid": 43, 00:16:53.231 "qid": 0, 00:16:53.231 "state": "enabled", 00:16:53.231 "thread": "nvmf_tgt_poll_group_000", 00:16:53.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:53.231 "listen_address": { 00:16:53.231 "trtype": "TCP", 00:16:53.231 "adrfam": "IPv4", 00:16:53.231 "traddr": "10.0.0.2", 00:16:53.231 "trsvcid": "4420" 00:16:53.231 }, 00:16:53.231 "peer_address": { 00:16:53.231 "trtype": "TCP", 00:16:53.231 "adrfam": "IPv4", 00:16:53.231 "traddr": "10.0.0.1", 00:16:53.231 "trsvcid": "60768" 00:16:53.231 }, 00:16:53.231 "auth": { 00:16:53.231 "state": "completed", 00:16:53.231 "digest": "sha256", 00:16:53.231 "dhgroup": "ffdhe8192" 00:16:53.231 } 00:16:53.231 } 00:16:53.231 ]' 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.231 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.492 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.492 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.492 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.492 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:16:53.492 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.436 07:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.436 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.008 00:16:55.008 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.008 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.008 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.008 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.008 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.008 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.008 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.008 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.008 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.008 { 00:16:55.008 "cntlid": 45, 00:16:55.008 "qid": 0, 00:16:55.008 "state": "enabled", 00:16:55.008 "thread": "nvmf_tgt_poll_group_000", 00:16:55.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:55.008 "listen_address": { 00:16:55.008 "trtype": "TCP", 00:16:55.008 "adrfam": "IPv4", 00:16:55.008 "traddr": "10.0.0.2", 00:16:55.008 "trsvcid": "4420" 00:16:55.008 }, 00:16:55.008 "peer_address": { 00:16:55.008 "trtype": "TCP", 00:16:55.008 "adrfam": "IPv4", 00:16:55.008 "traddr": "10.0.0.1", 00:16:55.008 "trsvcid": "60790" 00:16:55.008 }, 00:16:55.008 "auth": { 00:16:55.008 "state": "completed", 00:16:55.008 "digest": "sha256", 00:16:55.008 "dhgroup": "ffdhe8192" 00:16:55.008 } 00:16:55.008 } 00:16:55.008 ]' 00:16:55.008 
07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.008 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.274 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.274 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.274 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.274 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.274 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.274 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.534 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:16:55.534 07:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:16:56.106 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.106 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:56.106 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.106 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.106 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.106 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.106 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.106 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.366 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:56.366 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.366 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.366 07:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.366 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.366 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.366 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:56.366 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.366 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.366 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.366 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.366 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.366 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.627 00:16:56.888 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.888 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.888 07:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.888 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.888 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.888 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.888 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.888 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.888 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.888 { 00:16:56.888 "cntlid": 47, 00:16:56.888 "qid": 0, 00:16:56.888 "state": "enabled", 00:16:56.888 "thread": "nvmf_tgt_poll_group_000", 00:16:56.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:56.888 "listen_address": { 00:16:56.888 "trtype": "TCP", 00:16:56.888 "adrfam": "IPv4", 00:16:56.888 "traddr": "10.0.0.2", 00:16:56.888 "trsvcid": "4420" 00:16:56.888 }, 00:16:56.888 "peer_address": { 00:16:56.888 "trtype": "TCP", 00:16:56.888 "adrfam": "IPv4", 00:16:56.888 "traddr": "10.0.0.1", 00:16:56.888 "trsvcid": "43214" 00:16:56.888 }, 00:16:56.888 "auth": { 00:16:56.888 "state": "completed", 00:16:56.888 
"digest": "sha256", 00:16:56.888 "dhgroup": "ffdhe8192" 00:16:56.888 } 00:16:56.888 } 00:16:56.888 ]' 00:16:56.888 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.888 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.888 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.149 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.149 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.149 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.149 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.149 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.149 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:16:57.149 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:16:58.091 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.091 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:58.091 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.091 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.091 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.091 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:58.091 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.091 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.091 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:58.091 07:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:58.091 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:58.091 07:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.091 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.091 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:58.091 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.091 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.091 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.091 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.091 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.091 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.091 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.091 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.091 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.351 00:16:58.351 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.351 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.351 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.611 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.611 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.611 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.611 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.612 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.612 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.612 { 00:16:58.612 "cntlid": 49, 00:16:58.612 "qid": 0, 00:16:58.612 "state": "enabled", 00:16:58.612 "thread": "nvmf_tgt_poll_group_000", 00:16:58.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:58.612 "listen_address": { 00:16:58.612 "trtype": "TCP", 00:16:58.612 "adrfam": "IPv4", 
00:16:58.612 "traddr": "10.0.0.2", 00:16:58.612 "trsvcid": "4420" 00:16:58.612 }, 00:16:58.612 "peer_address": { 00:16:58.612 "trtype": "TCP", 00:16:58.612 "adrfam": "IPv4", 00:16:58.612 "traddr": "10.0.0.1", 00:16:58.612 "trsvcid": "43236" 00:16:58.612 }, 00:16:58.612 "auth": { 00:16:58.612 "state": "completed", 00:16:58.612 "digest": "sha384", 00:16:58.612 "dhgroup": "null" 00:16:58.612 } 00:16:58.612 } 00:16:58.612 ]' 00:16:58.612 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.612 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.612 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.612 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:58.612 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.612 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.612 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.612 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.872 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:58.872 07:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:16:59.442 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.442 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:59.442 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.442 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.443 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.443 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.443 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.443 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.703 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:59.703 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.703 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.703 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:59.703 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.703 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.703 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.703 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.703 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.703 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.703 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.703 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.703 07:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.963 00:16:59.963 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.963 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.963 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.224 { 00:17:00.224 "cntlid": 51, 00:17:00.224 "qid": 0, 00:17:00.224 "state": "enabled", 
00:17:00.224 "thread": "nvmf_tgt_poll_group_000", 00:17:00.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:00.224 "listen_address": { 00:17:00.224 "trtype": "TCP", 00:17:00.224 "adrfam": "IPv4", 00:17:00.224 "traddr": "10.0.0.2", 00:17:00.224 "trsvcid": "4420" 00:17:00.224 }, 00:17:00.224 "peer_address": { 00:17:00.224 "trtype": "TCP", 00:17:00.224 "adrfam": "IPv4", 00:17:00.224 "traddr": "10.0.0.1", 00:17:00.224 "trsvcid": "43262" 00:17:00.224 }, 00:17:00.224 "auth": { 00:17:00.224 "state": "completed", 00:17:00.224 "digest": "sha384", 00:17:00.224 "dhgroup": "null" 00:17:00.224 } 00:17:00.224 } 00:17:00.224 ]' 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.224 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.485 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:00.485 07:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:01.055 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.055 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:01.055 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.055 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.055 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.055 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.055 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
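[Annotation] At this point the sweep is into sha384 with the null DH group, i.e. DH-HMAC-CHAP without a Diffie-Hellman exchange. Every block in this trace is one iteration of the nested matrix in target/auth.sh: digests outermost (the @118 loop), DH groups next (@119), then the pre-registered keys (@120). A rough reconstruction of that driver loop, where hostrpc and connect_authenticate are the auth.sh helpers expanded in the trace (@31 and @123), and the array contents are inferred only from the combinations visible so far in this log (the script may well sweep more digests and groups):

  # Inferred, not exhaustive: sha256 finished above, sha384 is running here;
  # null/ffdhe6144/ffdhe8192 are the groups seen so far; keys are key0..key3.
  digests=(sha256 sha384)
  dhgroups=(null ffdhe6144 ffdhe8192)
  keys=(key0 key1 key2 key3)

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # @121: the host accepts exactly one digest/group combination.
              hostrpc bdev_nvme_set_options \
                  --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              # @123: the add_host/attach/verify/teardown round shown earlier.
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done

Note that ckeys[3] is empty in this run, so the @68 expansion ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} drops the controller-key argument entirely: the key3 rounds above add the host with --dhchap-key key3 alone, with no separate controller key.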
00:17:01.055 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.315 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:01.315 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.315 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.315 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:01.315 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:01.315 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.315 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.315 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.315 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.315 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.315 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.315 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.315 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.575 00:17:01.575 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.575 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.575 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.835 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.835 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.835 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.835 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.835 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.835 07:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.835 { 00:17:01.835 "cntlid": 53, 00:17:01.835 "qid": 0, 00:17:01.835 "state": "enabled", 00:17:01.835 "thread": "nvmf_tgt_poll_group_000", 00:17:01.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:01.835 "listen_address": { 00:17:01.835 "trtype": "TCP", 00:17:01.835 "adrfam": "IPv4", 00:17:01.835 "traddr": "10.0.0.2", 00:17:01.835 "trsvcid": "4420" 00:17:01.835 }, 00:17:01.835 "peer_address": { 00:17:01.835 "trtype": "TCP", 00:17:01.835 "adrfam": "IPv4", 00:17:01.835 "traddr": "10.0.0.1", 00:17:01.835 "trsvcid": "43286" 00:17:01.835 }, 00:17:01.835 "auth": { 00:17:01.835 "state": "completed", 00:17:01.835 "digest": "sha384", 00:17:01.835 "dhgroup": "null" 00:17:01.835 } 00:17:01.835 } 00:17:01.835 ]' 00:17:01.835 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.835 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.835 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.835 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.835 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.835 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.835 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.835 07:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.096 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:02.096 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:02.666 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.666 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:02.666 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.666 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.666 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.666 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:02.666 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.666 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.927 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:02.927 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.927 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.927 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.927 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:02.927 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.927 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:02.927 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.927 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.927 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.927 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.927 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.927 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.188 00:17:03.188 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.188 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.188 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.449 { 00:17:03.449 "cntlid": 55, 00:17:03.449 "qid": 0, 00:17:03.449 "state": "enabled", 00:17:03.449 "thread": "nvmf_tgt_poll_group_000", 00:17:03.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:03.449 "listen_address": { 00:17:03.449 "trtype": "TCP", 00:17:03.449 "adrfam": "IPv4", 00:17:03.449 "traddr": "10.0.0.2", 00:17:03.449 "trsvcid": "4420" 00:17:03.449 }, 00:17:03.449 "peer_address": { 00:17:03.449 "trtype": "TCP", 00:17:03.449 "adrfam": "IPv4", 00:17:03.449 "traddr": "10.0.0.1", 00:17:03.449 "trsvcid": "43308" 00:17:03.449 }, 00:17:03.449 "auth": { 00:17:03.449 "state": "completed", 00:17:03.449 "digest": "sha384", 00:17:03.449 "dhgroup": "null" 00:17:03.449 } 00:17:03.449 } 00:17:03.449 ]' 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.449 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.711 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:03.711 07:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:04.282 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.282 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:04.282 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.282 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.282 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.282 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.282 07:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.282 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:04.282 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:04.543 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:04.543 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.543 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.543 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:04.543 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:04.543 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.543 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.543 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.543 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.543 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.543 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.543 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.543 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.804 00:17:04.804 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.804 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.804 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.066 { 00:17:05.066 "cntlid": 57, 00:17:05.066 "qid": 0, 00:17:05.066 "state": "enabled", 00:17:05.066 "thread": "nvmf_tgt_poll_group_000", 00:17:05.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:05.066 "listen_address": { 00:17:05.066 "trtype": "TCP", 00:17:05.066 "adrfam": "IPv4", 00:17:05.066 "traddr": "10.0.0.2", 00:17:05.066 "trsvcid": "4420" 00:17:05.066 }, 00:17:05.066 "peer_address": { 00:17:05.066 "trtype": "TCP", 00:17:05.066 "adrfam": "IPv4", 00:17:05.066 "traddr": "10.0.0.1", 00:17:05.066 "trsvcid": "45836" 00:17:05.066 }, 00:17:05.066 "auth": { 00:17:05.066 "state": "completed", 00:17:05.066 "digest": "sha384", 00:17:05.066 "dhgroup": "ffdhe2048" 00:17:05.066 } 00:17:05.066 } 00:17:05.066 ]' 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.066 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.327 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:05.327 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:05.899 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.899 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:05.899 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.900 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.900 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.900 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.900 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:05.900 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.161 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:06.161 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.161 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.161 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:06.161 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:06.161 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.161 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.161 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.161 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.161 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.161 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.161 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.161 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.422 00:17:06.422 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.422 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.422 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.422 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.422 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.422 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.423 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.423 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.423 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.423 { 00:17:06.423 "cntlid": 59, 00:17:06.423 "qid": 0, 00:17:06.423 "state": "enabled", 00:17:06.423 "thread": "nvmf_tgt_poll_group_000", 00:17:06.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:06.423 "listen_address": { 00:17:06.423 "trtype": "TCP", 00:17:06.423 "adrfam": "IPv4", 00:17:06.423 "traddr": "10.0.0.2", 00:17:06.423 "trsvcid": "4420" 00:17:06.423 }, 00:17:06.423 "peer_address": { 00:17:06.423 "trtype": "TCP", 00:17:06.423 "adrfam": "IPv4", 00:17:06.423 "traddr": "10.0.0.1", 00:17:06.423 "trsvcid": "45858" 00:17:06.423 }, 00:17:06.423 "auth": { 00:17:06.423 "state": "completed", 00:17:06.423 "digest": "sha384", 00:17:06.423 "dhgroup": "ffdhe2048" 00:17:06.423 } 00:17:06.423 } 00:17:06.423 ]' 00:17:06.423 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.683 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.683 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.683 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.683 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.683 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.683 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.683 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.944 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:06.944 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:07.516 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.516 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:07.516 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.516 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.516 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.516 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.516 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.516 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.776 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:07.776 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.776 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.776 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.776 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.776 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.776 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.776 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.776 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.776 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.776 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.776 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.776 07:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.037 00:17:08.037 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.037 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.037 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.037 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.037 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.037 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.037 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.037 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.298 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.298 { 00:17:08.298 "cntlid": 61, 00:17:08.298 "qid": 0, 00:17:08.298 "state": "enabled", 00:17:08.298 "thread": "nvmf_tgt_poll_group_000", 00:17:08.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:08.298 "listen_address": { 00:17:08.299 "trtype": "TCP", 00:17:08.299 "adrfam": "IPv4", 00:17:08.299 "traddr": "10.0.0.2", 00:17:08.299 "trsvcid": "4420" 00:17:08.299 }, 00:17:08.299 "peer_address": { 00:17:08.299 "trtype": "TCP", 00:17:08.299 "adrfam": "IPv4", 00:17:08.299 "traddr": "10.0.0.1", 00:17:08.299 "trsvcid": "45888" 00:17:08.299 }, 00:17:08.299 "auth": { 00:17:08.299 "state": "completed", 00:17:08.299 "digest": "sha384", 00:17:08.299 "dhgroup": "ffdhe2048" 00:17:08.299 } 00:17:08.299 } 00:17:08.299 ]' 00:17:08.299 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.299 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.299 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.299 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.299 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.299 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.299 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.299 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.560 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:08.560 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:09.133 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.133 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:09.133 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.133 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.133 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.133 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.133 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.133 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.394 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:09.394 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.394 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.394 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.394 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.394 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.394 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:09.394 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.394 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.394 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.394 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.394 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.394 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.654 00:17:09.654 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.654 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.654 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.654 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.915 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.915 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.915 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.915 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.915 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.915 { 00:17:09.915 "cntlid": 63, 00:17:09.915 "qid": 0, 00:17:09.915 "state": "enabled", 00:17:09.915 "thread": "nvmf_tgt_poll_group_000", 00:17:09.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:09.915 "listen_address": { 00:17:09.915 "trtype": "TCP", 00:17:09.915 "adrfam": "IPv4", 00:17:09.915 "traddr": "10.0.0.2", 00:17:09.915 "trsvcid": "4420" 00:17:09.915 }, 00:17:09.915 "peer_address": { 00:17:09.915 "trtype": "TCP", 00:17:09.915 "adrfam": "IPv4", 00:17:09.915 "traddr": "10.0.0.1", 00:17:09.915 "trsvcid": "45916" 00:17:09.915 }, 00:17:09.915 "auth": { 00:17:09.915 "state": "completed", 00:17:09.915 "digest": "sha384", 00:17:09.915 "dhgroup": "ffdhe2048" 00:17:09.915 } 00:17:09.915 } 00:17:09.915 ]' 00:17:09.915 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.915 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.915 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.915 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.915 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.915 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.915 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.915 07:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.176 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:10.176 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:10.747 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:10.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.747 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:10.747 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.747 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.747 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.747 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.747 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.747 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:10.747 07:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:11.008 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:11.008 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.008 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.008 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:11.008 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:11.008 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.008 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.008 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.008 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.008 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.008 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.008 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.008 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.268 
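Every attach in this section is followed by the same assertion block before the controller is torn down again: the controller name is confirmed on the host, then the target's view of the qpair is queried and the negotiated auth parameters are checked field by field. A minimal sketch of that check for the ffdhe3072/key0 iteration just started, reusing the hostrpc and rpc_cmd helpers that appear in the trace:

  # Exactly one controller named nvme0 must exist on the host side.
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

  # The target reports per-qpair auth state; all three fields must match
  # what bdev_nvme_set_options allowed the host to negotiate.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384"    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

  hostrpc bdev_nvme_detach_controller nvme0

After the detach, the trace shows the same key pair exercised once more through the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...), followed by an nvme disconnect and nvmf_subsystem_remove_host before the next key is tried.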
00:17:11.269 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.269 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.269 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.529 { 00:17:11.529 "cntlid": 65, 00:17:11.529 "qid": 0, 00:17:11.529 "state": "enabled", 00:17:11.529 "thread": "nvmf_tgt_poll_group_000", 00:17:11.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:11.529 "listen_address": { 00:17:11.529 "trtype": "TCP", 00:17:11.529 "adrfam": "IPv4", 00:17:11.529 "traddr": "10.0.0.2", 00:17:11.529 "trsvcid": "4420" 00:17:11.529 }, 00:17:11.529 "peer_address": { 00:17:11.529 "trtype": "TCP", 00:17:11.529 "adrfam": "IPv4", 00:17:11.529 "traddr": "10.0.0.1", 00:17:11.529 "trsvcid": "45944" 00:17:11.529 }, 00:17:11.529 "auth": { 00:17:11.529 "state": "completed", 00:17:11.529 "digest": "sha384", 00:17:11.529 "dhgroup": "ffdhe3072" 00:17:11.529 } 00:17:11.529 } 00:17:11.529 ]' 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.529 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.789 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:11.790 07:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:12.360 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.360 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:12.360 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.360 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.360 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.360 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.360 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.360 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.621 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:12.621 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.621 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.621 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:12.621 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.621 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.621 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.621 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.621 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.621 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.621 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.621 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.621 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.881 00:17:12.881 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.881 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.881 07:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.141 { 00:17:13.141 "cntlid": 67, 00:17:13.141 "qid": 0, 00:17:13.141 "state": "enabled", 00:17:13.141 "thread": "nvmf_tgt_poll_group_000", 00:17:13.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:13.141 "listen_address": { 00:17:13.141 "trtype": "TCP", 00:17:13.141 "adrfam": "IPv4", 00:17:13.141 "traddr": "10.0.0.2", 00:17:13.141 "trsvcid": "4420" 00:17:13.141 }, 00:17:13.141 "peer_address": { 00:17:13.141 "trtype": "TCP", 00:17:13.141 "adrfam": "IPv4", 00:17:13.141 "traddr": "10.0.0.1", 00:17:13.141 "trsvcid": "45978" 00:17:13.141 }, 00:17:13.141 "auth": { 00:17:13.141 "state": "completed", 00:17:13.141 "digest": "sha384", 00:17:13.141 "dhgroup": "ffdhe3072" 00:17:13.141 } 00:17:13.141 } 00:17:13.141 ]' 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.141 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.401 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret 
DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:13.401 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:13.972 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.972 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:13.972 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.972 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.973 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.973 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.973 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.973 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.233 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:14.233 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.233 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.233 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.233 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.233 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.233 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.233 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.233 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.233 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.233 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.233 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.233 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.494 00:17:14.494 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.495 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.495 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.755 { 00:17:14.755 "cntlid": 69, 00:17:14.755 "qid": 0, 00:17:14.755 "state": "enabled", 00:17:14.755 "thread": "nvmf_tgt_poll_group_000", 00:17:14.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:14.755 "listen_address": { 00:17:14.755 "trtype": "TCP", 00:17:14.755 "adrfam": "IPv4", 00:17:14.755 "traddr": "10.0.0.2", 00:17:14.755 "trsvcid": "4420" 00:17:14.755 }, 00:17:14.755 "peer_address": { 00:17:14.755 "trtype": "TCP", 00:17:14.755 "adrfam": "IPv4", 00:17:14.755 "traddr": "10.0.0.1", 00:17:14.755 "trsvcid": "46010" 00:17:14.755 }, 00:17:14.755 "auth": { 00:17:14.755 "state": "completed", 00:17:14.755 "digest": "sha384", 00:17:14.755 "dhgroup": "ffdhe3072" 00:17:14.755 } 00:17:14.755 } 00:17:14.755 ]' 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.755 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:15.017 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:15.017 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:15.589 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.589 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:15.589 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.589 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.589 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.589 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.589 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.589 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.850 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:15.850 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.850 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.850 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:15.850 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:15.850 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.850 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:15.850 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.850 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.850 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.850 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
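# --- Annotation (not part of the captured log): the trace above is one pass of
# connect_authenticate() from target/auth.sh. A minimal standalone sketch of the
# RPC sequence it exercises, assuming a running SPDK target and host with the
# DH-HMAC-CHAP keys already registered (<hostnqn> stands in for the long
# nqn.2014-08.org.nvmexpress:uuid:008c5ac1-... value seen in the log):
#
#   rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
#       --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
#   rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
#       --dhchap-key key3
#   rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
#       -a 10.0.0.2 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 \
#       -b nvme0 --dhchap-key key3
#
# Note that for key3 the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion in
# the trace is empty, so no controller key is passed; keys 0-2 additionally pass
# --dhchap-ctrlr-key ckeyN, which (per the DH-HMAC-CHAP scheme) asks the
# controller to authenticate itself back to the host as well.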
00:17:15.850 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.850 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.133 00:17:16.133 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.133 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.133 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.396 { 00:17:16.396 "cntlid": 71, 00:17:16.396 "qid": 0, 00:17:16.396 "state": "enabled", 00:17:16.396 "thread": "nvmf_tgt_poll_group_000", 00:17:16.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:16.396 "listen_address": { 00:17:16.396 "trtype": "TCP", 00:17:16.396 "adrfam": "IPv4", 00:17:16.396 "traddr": "10.0.0.2", 00:17:16.396 "trsvcid": "4420" 00:17:16.396 }, 00:17:16.396 "peer_address": { 00:17:16.396 "trtype": "TCP", 00:17:16.396 "adrfam": "IPv4", 00:17:16.396 "traddr": "10.0.0.1", 00:17:16.396 "trsvcid": "55738" 00:17:16.396 }, 00:17:16.396 "auth": { 00:17:16.396 "state": "completed", 00:17:16.396 "digest": "sha384", 00:17:16.396 "dhgroup": "ffdhe3072" 00:17:16.396 } 00:17:16.396 } 00:17:16.396 ]' 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.396 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.657 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:16.657 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:17.226 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.226 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:17.226 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.226 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.226 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.226 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.226 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.226 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:17.226 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:17.487 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:17.487 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.487 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.487 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:17.487 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:17.487 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.487 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.487 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.487 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.487 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
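# --- Annotation (not part of the captured log): at this point the outer loop
# has advanced to the next DH group (ffdhe4096); within this section the groups
# ffdhe3072, ffdhe4096 and ffdhe6144 are each paired with key IDs 0-3 under the
# sha384 digest. Besides the bdev RPC path, every pairing is also exercised
# through nvme-cli with inline DHHC-1 secrets, as in the nvme_connect calls
# traced above. A sketch of that leg, with the secrets elided and <hostnqn> /
# <hostid> standing in for the UUID-based values in the log:
#
#   nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
#       -q <hostnqn> --hostid <hostid> -l 0 \
#       --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
#   nvme disconnect -n nqn.2024-03.io.spdk:cnode0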
00:17:17.487 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.487 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.487 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.748 00:17:17.748 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.748 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.748 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.748 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.748 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.748 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.748 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.008 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.008 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.008 { 00:17:18.008 "cntlid": 73, 00:17:18.008 "qid": 0, 00:17:18.008 "state": "enabled", 00:17:18.008 "thread": "nvmf_tgt_poll_group_000", 00:17:18.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:18.008 "listen_address": { 00:17:18.008 "trtype": "TCP", 00:17:18.008 "adrfam": "IPv4", 00:17:18.008 "traddr": "10.0.0.2", 00:17:18.008 "trsvcid": "4420" 00:17:18.008 }, 00:17:18.008 "peer_address": { 00:17:18.008 "trtype": "TCP", 00:17:18.008 "adrfam": "IPv4", 00:17:18.008 "traddr": "10.0.0.1", 00:17:18.008 "trsvcid": "55750" 00:17:18.008 }, 00:17:18.008 "auth": { 00:17:18.008 "state": "completed", 00:17:18.008 "digest": "sha384", 00:17:18.008 "dhgroup": "ffdhe4096" 00:17:18.008 } 00:17:18.008 } 00:17:18.008 ]' 00:17:18.008 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.008 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.008 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.008 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.008 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.008 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.008 
07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.008 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.269 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:18.269 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:18.841 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.841 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:18.841 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.841 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.841 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.841 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.841 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.841 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.102 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:19.102 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.102 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.102 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:19.102 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:19.102 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.102 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.102 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.102 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.102 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.102 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.102 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.102 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.362 00:17:19.362 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.362 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.362 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.623 { 00:17:19.623 "cntlid": 75, 00:17:19.623 "qid": 0, 00:17:19.623 "state": "enabled", 00:17:19.623 "thread": "nvmf_tgt_poll_group_000", 00:17:19.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:19.623 "listen_address": { 00:17:19.623 "trtype": "TCP", 00:17:19.623 "adrfam": "IPv4", 00:17:19.623 "traddr": "10.0.0.2", 00:17:19.623 "trsvcid": "4420" 00:17:19.623 }, 00:17:19.623 "peer_address": { 00:17:19.623 "trtype": "TCP", 00:17:19.623 "adrfam": "IPv4", 00:17:19.623 "traddr": "10.0.0.1", 00:17:19.623 "trsvcid": "55784" 00:17:19.623 }, 00:17:19.623 "auth": { 00:17:19.623 "state": "completed", 00:17:19.623 "digest": "sha384", 00:17:19.623 "dhgroup": "ffdhe4096" 00:17:19.623 } 00:17:19.623 } 00:17:19.623 ]' 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.623 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.883 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:19.883 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:20.453 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.453 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:20.453 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.453 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.453 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.453 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.453 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.453 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.714 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:20.714 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.714 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.714 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:20.714 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:20.714 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.714 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.714 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.714 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.714 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.714 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.714 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.714 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.974 00:17:20.975 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.975 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.975 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.234 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.234 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.234 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.234 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.234 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.234 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.234 { 00:17:21.234 "cntlid": 77, 00:17:21.234 "qid": 0, 00:17:21.234 "state": "enabled", 00:17:21.234 "thread": "nvmf_tgt_poll_group_000", 00:17:21.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:21.234 "listen_address": { 00:17:21.234 "trtype": "TCP", 00:17:21.234 "adrfam": "IPv4", 00:17:21.234 "traddr": "10.0.0.2", 00:17:21.234 "trsvcid": "4420" 00:17:21.234 }, 00:17:21.234 "peer_address": { 00:17:21.234 "trtype": "TCP", 00:17:21.234 "adrfam": "IPv4", 00:17:21.234 "traddr": "10.0.0.1", 00:17:21.234 "trsvcid": "55816" 00:17:21.234 }, 00:17:21.234 "auth": { 00:17:21.234 "state": "completed", 00:17:21.234 "digest": "sha384", 00:17:21.234 "dhgroup": "ffdhe4096" 00:17:21.234 } 00:17:21.234 } 00:17:21.234 ]' 00:17:21.234 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.234 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.234 07:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.234 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.234 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.234 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.234 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.234 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.494 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:21.494 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:22.112 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.112 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:22.112 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.112 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.112 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.112 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.112 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.112 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.396 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:22.396 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.396 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.396 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:22.396 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:22.396 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.396 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:22.396 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.396 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.396 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.396 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.396 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.396 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.657 00:17:22.657 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.657 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.657 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.917 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.917 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.918 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.918 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.918 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.918 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.918 { 00:17:22.918 "cntlid": 79, 00:17:22.918 "qid": 0, 00:17:22.918 "state": "enabled", 00:17:22.918 "thread": "nvmf_tgt_poll_group_000", 00:17:22.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:22.918 "listen_address": { 00:17:22.918 "trtype": "TCP", 00:17:22.918 "adrfam": "IPv4", 00:17:22.918 "traddr": "10.0.0.2", 00:17:22.918 "trsvcid": "4420" 00:17:22.918 }, 00:17:22.918 "peer_address": { 00:17:22.918 "trtype": "TCP", 00:17:22.918 "adrfam": "IPv4", 00:17:22.918 "traddr": "10.0.0.1", 00:17:22.918 "trsvcid": "55838" 00:17:22.918 }, 00:17:22.918 "auth": { 00:17:22.918 "state": "completed", 00:17:22.918 "digest": "sha384", 00:17:22.918 "dhgroup": "ffdhe4096" 00:17:22.918 } 00:17:22.918 } 00:17:22.918 ]' 00:17:22.918 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.918 07:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.918 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.918 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.918 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.918 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.918 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.918 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.179 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:23.179 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:23.750 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.750 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:23.750 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.750 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.750 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.750 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.750 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.750 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.750 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:24.010 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:24.010 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.010 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.010 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:24.010 07:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.010 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.011 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.011 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.011 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.011 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.011 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.011 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.011 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.271 00:17:24.271 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.271 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.271 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.532 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.532 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.532 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.532 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.532 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.532 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.532 { 00:17:24.532 "cntlid": 81, 00:17:24.532 "qid": 0, 00:17:24.532 "state": "enabled", 00:17:24.532 "thread": "nvmf_tgt_poll_group_000", 00:17:24.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:24.532 "listen_address": { 00:17:24.532 "trtype": "TCP", 00:17:24.532 "adrfam": "IPv4", 00:17:24.532 "traddr": "10.0.0.2", 00:17:24.532 "trsvcid": "4420" 00:17:24.532 }, 00:17:24.532 "peer_address": { 00:17:24.532 "trtype": "TCP", 00:17:24.532 "adrfam": "IPv4", 00:17:24.532 "traddr": "10.0.0.1", 00:17:24.532 "trsvcid": "55882" 00:17:24.532 }, 00:17:24.532 "auth": { 00:17:24.532 "state": "completed", 00:17:24.532 "digest": 
"sha384", 00:17:24.532 "dhgroup": "ffdhe6144" 00:17:24.532 } 00:17:24.532 } 00:17:24.532 ]' 00:17:24.532 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.532 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.532 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.792 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:24.792 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.792 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.792 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.792 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.792 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:24.792 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.732 07:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.992 00:17:25.992 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.992 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.992 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.253 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.253 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.253 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.253 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.253 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.253 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.253 { 00:17:26.253 "cntlid": 83, 00:17:26.253 "qid": 0, 00:17:26.253 "state": "enabled", 00:17:26.253 "thread": "nvmf_tgt_poll_group_000", 00:17:26.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:26.253 "listen_address": { 00:17:26.253 "trtype": "TCP", 00:17:26.253 "adrfam": "IPv4", 00:17:26.253 "traddr": "10.0.0.2", 00:17:26.253 
"trsvcid": "4420" 00:17:26.253 }, 00:17:26.253 "peer_address": { 00:17:26.253 "trtype": "TCP", 00:17:26.253 "adrfam": "IPv4", 00:17:26.253 "traddr": "10.0.0.1", 00:17:26.253 "trsvcid": "53074" 00:17:26.253 }, 00:17:26.253 "auth": { 00:17:26.253 "state": "completed", 00:17:26.253 "digest": "sha384", 00:17:26.253 "dhgroup": "ffdhe6144" 00:17:26.253 } 00:17:26.253 } 00:17:26.253 ]' 00:17:26.253 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.253 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.253 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.253 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:26.253 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.514 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.514 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.514 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.514 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:26.514 07:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.456 
07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.456 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.457 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.457 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.457 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.718 00:17:27.718 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.718 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.718 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.979 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.979 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.979 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.979 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.979 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.979 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.979 { 00:17:27.979 "cntlid": 85, 00:17:27.979 "qid": 0, 00:17:27.979 "state": "enabled", 00:17:27.979 "thread": "nvmf_tgt_poll_group_000", 00:17:27.979 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:27.979 "listen_address": { 00:17:27.979 "trtype": "TCP", 00:17:27.979 "adrfam": "IPv4", 00:17:27.979 "traddr": "10.0.0.2", 00:17:27.979 "trsvcid": "4420" 00:17:27.979 }, 00:17:27.979 "peer_address": { 00:17:27.979 "trtype": "TCP", 00:17:27.979 "adrfam": "IPv4", 00:17:27.979 "traddr": "10.0.0.1", 00:17:27.979 "trsvcid": "53100" 00:17:27.979 }, 00:17:27.979 "auth": { 00:17:27.979 "state": "completed", 00:17:27.979 "digest": "sha384", 00:17:27.979 "dhgroup": "ffdhe6144" 00:17:27.979 } 00:17:27.979 } 00:17:27.979 ]' 00:17:27.979 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.979 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.979 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.240 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.240 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.240 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.240 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.240 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.240 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:28.240 07:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.184 07:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.184 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.756 00:17:29.756 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.756 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.756 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.756 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.756 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.756 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.756 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.756 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.756 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.756 { 00:17:29.756 "cntlid": 87, 
00:17:29.756 "qid": 0, 00:17:29.756 "state": "enabled", 00:17:29.756 "thread": "nvmf_tgt_poll_group_000", 00:17:29.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:29.756 "listen_address": { 00:17:29.756 "trtype": "TCP", 00:17:29.756 "adrfam": "IPv4", 00:17:29.756 "traddr": "10.0.0.2", 00:17:29.756 "trsvcid": "4420" 00:17:29.756 }, 00:17:29.756 "peer_address": { 00:17:29.756 "trtype": "TCP", 00:17:29.756 "adrfam": "IPv4", 00:17:29.756 "traddr": "10.0.0.1", 00:17:29.756 "trsvcid": "53124" 00:17:29.756 }, 00:17:29.756 "auth": { 00:17:29.756 "state": "completed", 00:17:29.756 "digest": "sha384", 00:17:29.756 "dhgroup": "ffdhe6144" 00:17:29.756 } 00:17:29.756 } 00:17:29.756 ]' 00:17:29.756 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.017 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.017 07:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.017 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.017 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.017 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.017 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.017 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.277 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:30.277 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:30.848 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.849 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:30.849 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.849 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.849 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.849 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.849 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.849 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.849 07:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.123 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:31.123 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.123 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.123 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:31.123 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.123 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.123 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.123 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.123 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.123 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.123 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.123 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.123 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.386 00:17:31.386 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.386 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.386 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.647 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.647 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.647 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.647 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.647 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.647 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.647 { 00:17:31.647 "cntlid": 89, 00:17:31.647 "qid": 0, 00:17:31.647 "state": "enabled", 00:17:31.647 "thread": "nvmf_tgt_poll_group_000", 00:17:31.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:31.647 "listen_address": { 00:17:31.647 "trtype": "TCP", 00:17:31.647 "adrfam": "IPv4", 00:17:31.647 "traddr": "10.0.0.2", 00:17:31.647 "trsvcid": "4420" 00:17:31.647 }, 00:17:31.647 "peer_address": { 00:17:31.647 "trtype": "TCP", 00:17:31.647 "adrfam": "IPv4", 00:17:31.647 "traddr": "10.0.0.1", 00:17:31.647 "trsvcid": "53154" 00:17:31.647 }, 00:17:31.647 "auth": { 00:17:31.647 "state": "completed", 00:17:31.647 "digest": "sha384", 00:17:31.647 "dhgroup": "ffdhe8192" 00:17:31.647 } 00:17:31.647 } 00:17:31.647 ]' 00:17:31.647 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.647 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.647 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.908 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:31.908 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.908 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.908 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.908 07:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.908 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:31.908 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.851 07:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.851 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.852 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.424 00:17:33.424 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.424 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.424 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.424 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.686 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:33.686 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.686 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.686 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.686 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.686 { 00:17:33.686 "cntlid": 91, 00:17:33.686 "qid": 0, 00:17:33.686 "state": "enabled", 00:17:33.686 "thread": "nvmf_tgt_poll_group_000", 00:17:33.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:33.686 "listen_address": { 00:17:33.686 "trtype": "TCP", 00:17:33.686 "adrfam": "IPv4", 00:17:33.686 "traddr": "10.0.0.2", 00:17:33.686 "trsvcid": "4420" 00:17:33.686 }, 00:17:33.686 "peer_address": { 00:17:33.686 "trtype": "TCP", 00:17:33.686 "adrfam": "IPv4", 00:17:33.686 "traddr": "10.0.0.1", 00:17:33.686 "trsvcid": "53174" 00:17:33.686 }, 00:17:33.686 "auth": { 00:17:33.686 "state": "completed", 00:17:33.686 "digest": "sha384", 00:17:33.686 "dhgroup": "ffdhe8192" 00:17:33.686 } 00:17:33.686 } 00:17:33.686 ]' 00:17:33.686 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.686 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.686 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.686 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.686 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.686 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.686 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.686 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.948 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:33.948 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:34.520 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.520 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:34.520 07:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.520 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.520 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.520 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.520 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.520 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.781 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:34.781 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.781 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.781 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:34.781 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:34.781 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.781 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.781 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.781 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.781 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.781 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.781 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.781 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.354 00:17:35.354 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.354 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.354 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.354 07:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.354 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.354 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.354 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.354 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.354 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.354 { 00:17:35.354 "cntlid": 93, 00:17:35.354 "qid": 0, 00:17:35.354 "state": "enabled", 00:17:35.354 "thread": "nvmf_tgt_poll_group_000", 00:17:35.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:35.354 "listen_address": { 00:17:35.354 "trtype": "TCP", 00:17:35.354 "adrfam": "IPv4", 00:17:35.354 "traddr": "10.0.0.2", 00:17:35.354 "trsvcid": "4420" 00:17:35.354 }, 00:17:35.354 "peer_address": { 00:17:35.354 "trtype": "TCP", 00:17:35.354 "adrfam": "IPv4", 00:17:35.354 "traddr": "10.0.0.1", 00:17:35.354 "trsvcid": "60226" 00:17:35.354 }, 00:17:35.355 "auth": { 00:17:35.355 "state": "completed", 00:17:35.355 "digest": "sha384", 00:17:35.355 "dhgroup": "ffdhe8192" 00:17:35.355 } 00:17:35.355 } 00:17:35.355 ]' 00:17:35.355 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.355 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.355 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.616 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.616 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.616 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.616 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.616 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.616 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:35.616 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.558 07:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.558 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.130 00:17:37.130 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.130 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.130 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.130 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.130 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.130 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.130 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.130 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.130 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.130 { 00:17:37.130 "cntlid": 95, 00:17:37.130 "qid": 0, 00:17:37.130 "state": "enabled", 00:17:37.130 "thread": "nvmf_tgt_poll_group_000", 00:17:37.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:37.130 "listen_address": { 00:17:37.130 "trtype": "TCP", 00:17:37.130 "adrfam": "IPv4", 00:17:37.130 "traddr": "10.0.0.2", 00:17:37.130 "trsvcid": "4420" 00:17:37.130 }, 00:17:37.130 "peer_address": { 00:17:37.130 "trtype": "TCP", 00:17:37.130 "adrfam": "IPv4", 00:17:37.130 "traddr": "10.0.0.1", 00:17:37.130 "trsvcid": "60254" 00:17:37.130 }, 00:17:37.130 "auth": { 00:17:37.130 "state": "completed", 00:17:37.130 "digest": "sha384", 00:17:37.130 "dhgroup": "ffdhe8192" 00:17:37.130 } 00:17:37.130 } 00:17:37.130 ]' 00:17:37.130 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.391 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.391 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.391 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.391 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.391 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.391 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.391 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.652 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:37.652 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:38.223 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.223 07:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:38.223 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.223 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.223 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.223 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:38.223 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.223 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.223 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:38.223 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:38.484 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:38.484 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.484 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.484 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:38.484 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.484 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.484 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.484 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.484 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.484 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.484 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.484 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.484 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.484 00:17:38.745 
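At this point the sweep advances from sha384 with the ffdhe* groups to sha512 with the null dhgroup. The iteration order is spelled out by the frame markers (auth.sh@118-121): digests outermost, then dhgroups, then the four key IDs. In sketch form, using the script's own helpers as they appear in this trace (hostrpc wraps rpc.py -s /var/tmp/host.sock, and connect_authenticate is the attach/verify/detach cycle sketched above); the array values listed are only the ones exercised in this excerpt, the full script sweeps more:

digests=(sha384 sha512)
dhgroups=(ffdhe6144 ffdhe8192 null)
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do        # keys[0..3], registered earlier
      # Re-arm the host initiator so only this combination can succeed...
      hostrpc bdev_nvme_set_options \
              --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # ...then run one attach/verify/detach pass with key$keyid.
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done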
07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.745 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.745 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.745 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.745 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.745 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.745 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.745 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.745 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.745 { 00:17:38.745 "cntlid": 97, 00:17:38.745 "qid": 0, 00:17:38.745 "state": "enabled", 00:17:38.745 "thread": "nvmf_tgt_poll_group_000", 00:17:38.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:38.745 "listen_address": { 00:17:38.745 "trtype": "TCP", 00:17:38.745 "adrfam": "IPv4", 00:17:38.745 "traddr": "10.0.0.2", 00:17:38.745 "trsvcid": "4420" 00:17:38.745 }, 00:17:38.745 "peer_address": { 00:17:38.745 "trtype": "TCP", 00:17:38.745 "adrfam": "IPv4", 00:17:38.745 "traddr": "10.0.0.1", 00:17:38.745 "trsvcid": "60292" 00:17:38.745 }, 00:17:38.745 "auth": { 00:17:38.745 "state": "completed", 00:17:38.745 "digest": "sha512", 00:17:38.745 "dhgroup": "null" 00:17:38.745 } 00:17:38.745 } 00:17:38.745 ]' 00:17:38.746 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.746 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.746 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.007 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:39.007 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.007 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.007 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.007 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.007 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:39.007 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:39.948 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.948 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:39.948 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.948 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.948 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.948 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.948 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:39.948 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:39.948 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:39.948 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.948 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.948 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:39.948 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.948 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.948 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.948 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.948 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.948 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.948 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.948 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.948 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.208 00:17:40.208 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.208 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.208 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.468 { 00:17:40.468 "cntlid": 99, 00:17:40.468 "qid": 0, 00:17:40.468 "state": "enabled", 00:17:40.468 "thread": "nvmf_tgt_poll_group_000", 00:17:40.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:40.468 "listen_address": { 00:17:40.468 "trtype": "TCP", 00:17:40.468 "adrfam": "IPv4", 00:17:40.468 "traddr": "10.0.0.2", 00:17:40.468 "trsvcid": "4420" 00:17:40.468 }, 00:17:40.468 "peer_address": { 00:17:40.468 "trtype": "TCP", 00:17:40.468 "adrfam": "IPv4", 00:17:40.468 "traddr": "10.0.0.1", 00:17:40.468 "trsvcid": "60304" 00:17:40.468 }, 00:17:40.468 "auth": { 00:17:40.468 "state": "completed", 00:17:40.468 "digest": "sha512", 00:17:40.468 "dhgroup": "null" 00:17:40.468 } 00:17:40.468 } 00:17:40.468 ]' 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.468 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.728 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:40.728 07:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:41.299 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.299 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:41.299 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.299 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.299 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.299 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.299 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.299 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.560 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:41.560 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.560 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.560 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:41.560 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.560 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.560 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.560 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.560 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.560 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.560 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.560 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:41.560 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.821 00:17:41.821 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.821 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.821 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.821 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.821 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.821 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.821 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.081 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.081 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.081 { 00:17:42.081 "cntlid": 101, 00:17:42.081 "qid": 0, 00:17:42.081 "state": "enabled", 00:17:42.081 "thread": "nvmf_tgt_poll_group_000", 00:17:42.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:42.081 "listen_address": { 00:17:42.081 "trtype": "TCP", 00:17:42.081 "adrfam": "IPv4", 00:17:42.081 "traddr": "10.0.0.2", 00:17:42.081 "trsvcid": "4420" 00:17:42.081 }, 00:17:42.081 "peer_address": { 00:17:42.081 "trtype": "TCP", 00:17:42.081 "adrfam": "IPv4", 00:17:42.081 "traddr": "10.0.0.1", 00:17:42.081 "trsvcid": "60330" 00:17:42.081 }, 00:17:42.081 "auth": { 00:17:42.081 "state": "completed", 00:17:42.081 "digest": "sha512", 00:17:42.081 "dhgroup": "null" 00:17:42.081 } 00:17:42.081 } 00:17:42.081 ]' 00:17:42.081 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.081 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.081 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.081 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:42.081 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.081 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.081 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.081 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.342 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:42.342 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:42.914 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.914 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:42.914 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.914 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.914 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.914 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.914 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.914 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.175 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:43.175 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.175 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.175 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:43.175 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.175 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.175 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:43.175 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.175 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.175 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.175 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.175 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.175 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.436 00:17:43.436 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.436 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.436 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.696 { 00:17:43.696 "cntlid": 103, 00:17:43.696 "qid": 0, 00:17:43.696 "state": "enabled", 00:17:43.696 "thread": "nvmf_tgt_poll_group_000", 00:17:43.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:43.696 "listen_address": { 00:17:43.696 "trtype": "TCP", 00:17:43.696 "adrfam": "IPv4", 00:17:43.696 "traddr": "10.0.0.2", 00:17:43.696 "trsvcid": "4420" 00:17:43.696 }, 00:17:43.696 "peer_address": { 00:17:43.696 "trtype": "TCP", 00:17:43.696 "adrfam": "IPv4", 00:17:43.696 "traddr": "10.0.0.1", 00:17:43.696 "trsvcid": "60356" 00:17:43.696 }, 00:17:43.696 "auth": { 00:17:43.696 "state": "completed", 00:17:43.696 "digest": "sha512", 00:17:43.696 "dhgroup": "null" 00:17:43.696 } 00:17:43.696 } 00:17:43.696 ]' 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.696 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.957 07:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:43.957 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:44.529 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.529 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:44.529 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.529 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.529 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.529 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.529 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.529 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.529 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.790 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:44.790 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.790 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.790 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:44.790 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:44.790 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.790 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.790 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.790 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.790 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.790 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
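[annotation] Besides the SPDK-to-SPDK attach being verified below, each cycle also closes the loop with the kernel initiator: nvme-cli connects with the secrets passed inline in DHHC-1 format, then disconnects. A condensed sketch of that step, assuming placeholder secrets; $hostnqn and $hostid stand in for the uuid values in the trace, and the real DHHC-1 strings are the ones shown in the log, not these.

  # Kernel-initiator check: connect with inline DH-HMAC-CHAP secrets, then
  # tear down. Flags mirror the nvme connect lines in this log.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret      'DHHC-1:00:<base64 host secret placeholder>:' \
      --dhchap-ctrl-secret 'DHHC-1:00:<base64 ctrl secret placeholder>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0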
00:17:44.790 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.790 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.050 00:17:45.050 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.050 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.051 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.311 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.311 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.311 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.311 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.312 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.312 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.312 { 00:17:45.312 "cntlid": 105, 00:17:45.312 "qid": 0, 00:17:45.312 "state": "enabled", 00:17:45.312 "thread": "nvmf_tgt_poll_group_000", 00:17:45.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:45.312 "listen_address": { 00:17:45.312 "trtype": "TCP", 00:17:45.312 "adrfam": "IPv4", 00:17:45.312 "traddr": "10.0.0.2", 00:17:45.312 "trsvcid": "4420" 00:17:45.312 }, 00:17:45.312 "peer_address": { 00:17:45.312 "trtype": "TCP", 00:17:45.312 "adrfam": "IPv4", 00:17:45.312 "traddr": "10.0.0.1", 00:17:45.312 "trsvcid": "41086" 00:17:45.312 }, 00:17:45.312 "auth": { 00:17:45.312 "state": "completed", 00:17:45.312 "digest": "sha512", 00:17:45.312 "dhgroup": "ffdhe2048" 00:17:45.312 } 00:17:45.312 } 00:17:45.312 ]' 00:17:45.312 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.312 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.312 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.312 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:45.312 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.312 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.312 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.312 07:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.573 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:45.573 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:46.145 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.145 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:46.145 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.145 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.145 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.145 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.145 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.145 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.406 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:46.406 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.406 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.406 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:46.406 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:46.406 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.406 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.406 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.406 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:46.406 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.406 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.406 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.406 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.665 00:17:46.665 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.665 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.665 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.927 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.927 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.927 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.927 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.927 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.927 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.927 { 00:17:46.927 "cntlid": 107, 00:17:46.927 "qid": 0, 00:17:46.927 "state": "enabled", 00:17:46.927 "thread": "nvmf_tgt_poll_group_000", 00:17:46.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:46.927 "listen_address": { 00:17:46.927 "trtype": "TCP", 00:17:46.927 "adrfam": "IPv4", 00:17:46.927 "traddr": "10.0.0.2", 00:17:46.927 "trsvcid": "4420" 00:17:46.927 }, 00:17:46.927 "peer_address": { 00:17:46.927 "trtype": "TCP", 00:17:46.927 "adrfam": "IPv4", 00:17:46.927 "traddr": "10.0.0.1", 00:17:46.927 "trsvcid": "41108" 00:17:46.927 }, 00:17:46.927 "auth": { 00:17:46.927 "state": "completed", 00:17:46.927 "digest": "sha512", 00:17:46.927 "dhgroup": "ffdhe2048" 00:17:46.927 } 00:17:46.927 } 00:17:46.927 ]' 00:17:46.927 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.927 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.927 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.927 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.927 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:46.927 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.927 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.927 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.187 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:47.187 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:47.758 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.758 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:47.758 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.758 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.758 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.758 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.758 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.758 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.017 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:48.017 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.017 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.017 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:48.017 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:48.017 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.017 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:48.017 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.017 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.017 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.018 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.018 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.018 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.278 00:17:48.278 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.278 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.278 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.539 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.539 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.539 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.539 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.539 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.539 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.539 { 00:17:48.539 "cntlid": 109, 00:17:48.539 "qid": 0, 00:17:48.539 "state": "enabled", 00:17:48.539 "thread": "nvmf_tgt_poll_group_000", 00:17:48.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:48.539 "listen_address": { 00:17:48.539 "trtype": "TCP", 00:17:48.539 "adrfam": "IPv4", 00:17:48.539 "traddr": "10.0.0.2", 00:17:48.539 "trsvcid": "4420" 00:17:48.539 }, 00:17:48.539 "peer_address": { 00:17:48.539 "trtype": "TCP", 00:17:48.539 "adrfam": "IPv4", 00:17:48.539 "traddr": "10.0.0.1", 00:17:48.539 "trsvcid": "41126" 00:17:48.539 }, 00:17:48.539 "auth": { 00:17:48.539 "state": "completed", 00:17:48.539 "digest": "sha512", 00:17:48.539 "dhgroup": "ffdhe2048" 00:17:48.539 } 00:17:48.539 } 00:17:48.539 ]' 00:17:48.539 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.539 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.539 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.539 07:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.539 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.539 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.539 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.539 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.799 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:48.799 07:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:49.371 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.371 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:49.371 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.371 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.371 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.371 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.371 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.371 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.631 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:49.631 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.631 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.631 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:49.631 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.631 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.631 07:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:49.631 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.631 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.631 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.631 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.631 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.631 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.891 00:17:49.891 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.891 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.891 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.151 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.151 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.151 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.151 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.151 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.151 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.151 { 00:17:50.151 "cntlid": 111, 00:17:50.151 "qid": 0, 00:17:50.151 "state": "enabled", 00:17:50.151 "thread": "nvmf_tgt_poll_group_000", 00:17:50.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:50.151 "listen_address": { 00:17:50.151 "trtype": "TCP", 00:17:50.151 "adrfam": "IPv4", 00:17:50.151 "traddr": "10.0.0.2", 00:17:50.151 "trsvcid": "4420" 00:17:50.151 }, 00:17:50.151 "peer_address": { 00:17:50.151 "trtype": "TCP", 00:17:50.151 "adrfam": "IPv4", 00:17:50.151 "traddr": "10.0.0.1", 00:17:50.151 "trsvcid": "41162" 00:17:50.151 }, 00:17:50.151 "auth": { 00:17:50.151 "state": "completed", 00:17:50.151 "digest": "sha512", 00:17:50.151 "dhgroup": "ffdhe2048" 00:17:50.151 } 00:17:50.151 } 00:17:50.151 ]' 00:17:50.151 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.151 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.151 
07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.151 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.151 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.151 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.151 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.151 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.412 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:50.412 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:50.984 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.984 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:50.984 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.984 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.984 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.984 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.984 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.984 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.984 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.244 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:51.244 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.244 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.244 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:51.244 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:51.244 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.244 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.244 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.244 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.244 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.244 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.244 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.244 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.504 00:17:51.504 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.504 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.504 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.504 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.504 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.504 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.504 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.765 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.765 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.765 { 00:17:51.765 "cntlid": 113, 00:17:51.765 "qid": 0, 00:17:51.765 "state": "enabled", 00:17:51.765 "thread": "nvmf_tgt_poll_group_000", 00:17:51.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:51.765 "listen_address": { 00:17:51.765 "trtype": "TCP", 00:17:51.765 "adrfam": "IPv4", 00:17:51.765 "traddr": "10.0.0.2", 00:17:51.765 "trsvcid": "4420" 00:17:51.765 }, 00:17:51.765 "peer_address": { 00:17:51.765 "trtype": "TCP", 00:17:51.765 "adrfam": "IPv4", 00:17:51.765 "traddr": "10.0.0.1", 00:17:51.765 "trsvcid": "41196" 00:17:51.765 }, 00:17:51.765 "auth": { 00:17:51.765 "state": "completed", 00:17:51.765 "digest": "sha512", 00:17:51.765 "dhgroup": "ffdhe3072" 00:17:51.765 } 00:17:51.765 } 00:17:51.765 ]' 00:17:51.765 07:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.765 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.765 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.765 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.765 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.765 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.765 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.765 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.026 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:52.026 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:52.598 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.598 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:52.598 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.598 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.598 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.598 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.598 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.598 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.860 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:52.860 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.860 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:52.860 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:52.860 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.860 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.860 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.860 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.860 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.860 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.860 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.860 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.860 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.120 00:17:53.120 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.120 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.120 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.381 { 00:17:53.381 "cntlid": 115, 00:17:53.381 "qid": 0, 00:17:53.381 "state": "enabled", 00:17:53.381 "thread": "nvmf_tgt_poll_group_000", 00:17:53.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:53.381 "listen_address": { 00:17:53.381 "trtype": "TCP", 00:17:53.381 "adrfam": "IPv4", 00:17:53.381 "traddr": "10.0.0.2", 00:17:53.381 "trsvcid": "4420" 00:17:53.381 }, 00:17:53.381 "peer_address": { 00:17:53.381 "trtype": "TCP", 00:17:53.381 "adrfam": "IPv4", 
00:17:53.381 "traddr": "10.0.0.1", 00:17:53.381 "trsvcid": "41234" 00:17:53.381 }, 00:17:53.381 "auth": { 00:17:53.381 "state": "completed", 00:17:53.381 "digest": "sha512", 00:17:53.381 "dhgroup": "ffdhe3072" 00:17:53.381 } 00:17:53.381 } 00:17:53.381 ]' 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.381 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.642 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:53.642 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:17:54.212 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.212 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:54.212 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.212 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.212 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.212 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.212 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.212 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.473 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:54.473 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.473 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.473 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:54.473 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.473 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.473 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.473 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.473 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.473 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.473 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.473 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.473 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.733 00:17:54.733 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.733 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.733 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.993 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.993 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.993 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.993 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.993 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.993 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.993 { 00:17:54.993 "cntlid": 117, 00:17:54.993 "qid": 0, 00:17:54.993 "state": "enabled", 00:17:54.993 "thread": "nvmf_tgt_poll_group_000", 00:17:54.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:54.993 "listen_address": { 00:17:54.993 "trtype": "TCP", 
00:17:54.993 "adrfam": "IPv4", 00:17:54.993 "traddr": "10.0.0.2", 00:17:54.993 "trsvcid": "4420" 00:17:54.993 }, 00:17:54.993 "peer_address": { 00:17:54.993 "trtype": "TCP", 00:17:54.993 "adrfam": "IPv4", 00:17:54.993 "traddr": "10.0.0.1", 00:17:54.993 "trsvcid": "41264" 00:17:54.993 }, 00:17:54.993 "auth": { 00:17:54.993 "state": "completed", 00:17:54.993 "digest": "sha512", 00:17:54.993 "dhgroup": "ffdhe3072" 00:17:54.993 } 00:17:54.993 } 00:17:54.993 ]' 00:17:54.993 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.993 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.993 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.993 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.993 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.993 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.993 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.993 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.253 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:55.253 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:17:55.823 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.823 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:55.823 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.823 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.823 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.823 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.823 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.823 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.084 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:56.084 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.085 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.085 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:56.085 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:56.085 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.085 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:56.085 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.085 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.085 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.085 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.085 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.085 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.345 00:17:56.345 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.345 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.345 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.605 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.605 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.605 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.605 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.605 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.605 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.605 { 00:17:56.605 "cntlid": 119, 00:17:56.605 "qid": 0, 00:17:56.605 "state": "enabled", 00:17:56.605 "thread": "nvmf_tgt_poll_group_000", 00:17:56.605 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:56.605 "listen_address": { 00:17:56.605 "trtype": "TCP", 00:17:56.605 "adrfam": "IPv4", 00:17:56.606 "traddr": "10.0.0.2", 00:17:56.606 "trsvcid": "4420" 00:17:56.606 }, 00:17:56.606 "peer_address": { 00:17:56.606 "trtype": "TCP", 00:17:56.606 "adrfam": "IPv4", 00:17:56.606 "traddr": "10.0.0.1", 00:17:56.606 "trsvcid": "57598" 00:17:56.606 }, 00:17:56.606 "auth": { 00:17:56.606 "state": "completed", 00:17:56.606 "digest": "sha512", 00:17:56.606 "dhgroup": "ffdhe3072" 00:17:56.606 } 00:17:56.606 } 00:17:56.606 ]' 00:17:56.606 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.606 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.606 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.606 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.606 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.606 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.606 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.606 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.866 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:56.866 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:17:57.436 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.436 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:57.436 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.436 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.436 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.436 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.436 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.436 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.436 07:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.697 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:57.697 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.697 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.697 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:57.697 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.697 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.697 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.697 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.697 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.697 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.697 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.697 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.697 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.957 00:17:57.957 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.957 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.957 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.217 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.217 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.217 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.217 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.217 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.217 07:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.217 { 00:17:58.217 "cntlid": 121, 00:17:58.217 "qid": 0, 00:17:58.217 "state": "enabled", 00:17:58.217 "thread": "nvmf_tgt_poll_group_000", 00:17:58.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:58.217 "listen_address": { 00:17:58.217 "trtype": "TCP", 00:17:58.217 "adrfam": "IPv4", 00:17:58.217 "traddr": "10.0.0.2", 00:17:58.217 "trsvcid": "4420" 00:17:58.217 }, 00:17:58.217 "peer_address": { 00:17:58.217 "trtype": "TCP", 00:17:58.217 "adrfam": "IPv4", 00:17:58.217 "traddr": "10.0.0.1", 00:17:58.217 "trsvcid": "57624" 00:17:58.217 }, 00:17:58.217 "auth": { 00:17:58.217 "state": "completed", 00:17:58.217 "digest": "sha512", 00:17:58.217 "dhgroup": "ffdhe4096" 00:17:58.217 } 00:17:58.217 } 00:17:58.217 ]' 00:17:58.217 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.217 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.217 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.217 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.217 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.217 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.217 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.217 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.478 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:58.478 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:17:59.048 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.048 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:59.048 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.048 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.048 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
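
For reference, every iteration traced above is the same round-trip with a different digest/dhgroup/key combination. Below is a minimal sketch of one iteration (here key1, matching the loop entry that follows), assuming the target app listens on the default RPC socket and that keys named key1/ckey1 were loaded into the target's keyring earlier in the test run (that setup is outside this excerpt); the NQNs, address, and rpc.py path mirror this log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: restrict the initiator to a single digest/DH-group pair.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Target side: require bidirectional DH-HMAC-CHAP for this host.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Connect, then check what the qpair actually negotiated.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

  # Tear down so the next combination starts from a clean slate.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The kernel-initiator leg of each iteration (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ...) passes the raw secrets rather than keyring names. In the DHHC-1:nn: prefix of those secrets, nn records whether and how the secret was transformed (00 = untransformed; 01/02/03 = SHA-256/384/512-transformed), per the NVMe in-band authentication secret representation; nvme-cli's gen-dhchap-key command produces strings in this format.
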
00:17:59.048 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.048 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.048 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.315 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:59.315 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.315 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.315 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:59.315 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:59.315 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.315 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.315 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.315 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.315 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.315 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.315 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.315 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.616 00:17:59.616 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.616 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.616 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.931 { 00:17:59.931 "cntlid": 123, 00:17:59.931 "qid": 0, 00:17:59.931 "state": "enabled", 00:17:59.931 "thread": "nvmf_tgt_poll_group_000", 00:17:59.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:59.931 "listen_address": { 00:17:59.931 "trtype": "TCP", 00:17:59.931 "adrfam": "IPv4", 00:17:59.931 "traddr": "10.0.0.2", 00:17:59.931 "trsvcid": "4420" 00:17:59.931 }, 00:17:59.931 "peer_address": { 00:17:59.931 "trtype": "TCP", 00:17:59.931 "adrfam": "IPv4", 00:17:59.931 "traddr": "10.0.0.1", 00:17:59.931 "trsvcid": "57652" 00:17:59.931 }, 00:17:59.931 "auth": { 00:17:59.931 "state": "completed", 00:17:59.931 "digest": "sha512", 00:17:59.931 "dhgroup": "ffdhe4096" 00:17:59.931 } 00:17:59.931 } 00:17:59.931 ]' 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.931 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.192 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:18:00.192 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:18:00.762 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.762 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:00.762 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.762 07:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.762 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.762 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.762 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.762 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.022 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:01.022 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.022 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.022 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:01.022 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:01.022 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.022 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.022 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.022 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.022 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.022 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.022 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.023 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.283 00:18:01.283 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.283 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.283 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.543 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.543 07:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.543 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.543 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.543 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.543 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.543 { 00:18:01.543 "cntlid": 125, 00:18:01.543 "qid": 0, 00:18:01.543 "state": "enabled", 00:18:01.543 "thread": "nvmf_tgt_poll_group_000", 00:18:01.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:01.543 "listen_address": { 00:18:01.543 "trtype": "TCP", 00:18:01.543 "adrfam": "IPv4", 00:18:01.543 "traddr": "10.0.0.2", 00:18:01.543 "trsvcid": "4420" 00:18:01.543 }, 00:18:01.543 "peer_address": { 00:18:01.543 "trtype": "TCP", 00:18:01.543 "adrfam": "IPv4", 00:18:01.543 "traddr": "10.0.0.1", 00:18:01.543 "trsvcid": "57688" 00:18:01.543 }, 00:18:01.543 "auth": { 00:18:01.543 "state": "completed", 00:18:01.543 "digest": "sha512", 00:18:01.543 "dhgroup": "ffdhe4096" 00:18:01.543 } 00:18:01.543 } 00:18:01.543 ]' 00:18:01.543 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.543 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.543 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.543 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.543 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.543 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.543 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.543 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.803 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:18:01.803 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:18:02.373 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.373 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:02.373 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.373 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.373 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.373 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.373 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.373 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.633 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:02.633 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.633 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.633 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:02.633 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.633 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.633 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:02.633 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.633 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.633 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.633 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.633 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.634 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.894 00:18:02.894 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.894 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.894 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.155 07:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.155 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.155 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.155 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.155 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.155 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.155 { 00:18:03.155 "cntlid": 127, 00:18:03.155 "qid": 0, 00:18:03.155 "state": "enabled", 00:18:03.155 "thread": "nvmf_tgt_poll_group_000", 00:18:03.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:03.155 "listen_address": { 00:18:03.155 "trtype": "TCP", 00:18:03.155 "adrfam": "IPv4", 00:18:03.155 "traddr": "10.0.0.2", 00:18:03.155 "trsvcid": "4420" 00:18:03.155 }, 00:18:03.155 "peer_address": { 00:18:03.155 "trtype": "TCP", 00:18:03.155 "adrfam": "IPv4", 00:18:03.155 "traddr": "10.0.0.1", 00:18:03.155 "trsvcid": "57716" 00:18:03.155 }, 00:18:03.155 "auth": { 00:18:03.155 "state": "completed", 00:18:03.155 "digest": "sha512", 00:18:03.155 "dhgroup": "ffdhe4096" 00:18:03.155 } 00:18:03.155 } 00:18:03.155 ]' 00:18:03.155 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.155 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.155 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.155 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.155 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.155 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.155 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.155 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.415 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:18:03.415 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:18:03.987 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.987 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:03.987 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.987 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.987 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.987 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.987 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.987 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.987 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.248 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:04.248 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.248 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.248 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:04.248 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:04.248 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.248 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.248 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.248 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.248 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.248 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.248 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.248 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.509 00:18:04.509 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.509 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.509 
07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.770 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.770 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.770 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.770 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.770 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.770 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.770 { 00:18:04.770 "cntlid": 129, 00:18:04.770 "qid": 0, 00:18:04.770 "state": "enabled", 00:18:04.770 "thread": "nvmf_tgt_poll_group_000", 00:18:04.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:04.770 "listen_address": { 00:18:04.770 "trtype": "TCP", 00:18:04.770 "adrfam": "IPv4", 00:18:04.770 "traddr": "10.0.0.2", 00:18:04.770 "trsvcid": "4420" 00:18:04.770 }, 00:18:04.770 "peer_address": { 00:18:04.770 "trtype": "TCP", 00:18:04.770 "adrfam": "IPv4", 00:18:04.770 "traddr": "10.0.0.1", 00:18:04.770 "trsvcid": "57752" 00:18:04.770 }, 00:18:04.770 "auth": { 00:18:04.770 "state": "completed", 00:18:04.770 "digest": "sha512", 00:18:04.770 "dhgroup": "ffdhe6144" 00:18:04.770 } 00:18:04.770 } 00:18:04.770 ]' 00:18:04.770 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.771 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.771 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.771 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.771 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.032 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.032 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.032 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.032 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:18:05.032 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret 
DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:18:05.605 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.865 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.865 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.865 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.865 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.865 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.126 00:18:06.387 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.387 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.387 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.387 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.387 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.387 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.387 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.387 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.387 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.387 { 00:18:06.387 "cntlid": 131, 00:18:06.387 "qid": 0, 00:18:06.387 "state": "enabled", 00:18:06.387 "thread": "nvmf_tgt_poll_group_000", 00:18:06.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:06.387 "listen_address": { 00:18:06.387 "trtype": "TCP", 00:18:06.387 "adrfam": "IPv4", 00:18:06.387 "traddr": "10.0.0.2", 00:18:06.387 "trsvcid": "4420" 00:18:06.387 }, 00:18:06.387 "peer_address": { 00:18:06.387 "trtype": "TCP", 00:18:06.387 "adrfam": "IPv4", 00:18:06.387 "traddr": "10.0.0.1", 00:18:06.387 "trsvcid": "42800" 00:18:06.387 }, 00:18:06.387 "auth": { 00:18:06.387 "state": "completed", 00:18:06.387 "digest": "sha512", 00:18:06.387 "dhgroup": "ffdhe6144" 00:18:06.387 } 00:18:06.387 } 00:18:06.387 ]' 00:18:06.387 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.387 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.387 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.647 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.647 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.647 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.647 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.647 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.647 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:18:06.647 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.586 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.846 00:18:08.106 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.106 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.106 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.106 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.106 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.106 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.106 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.106 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.106 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.106 { 00:18:08.106 "cntlid": 133, 00:18:08.106 "qid": 0, 00:18:08.106 "state": "enabled", 00:18:08.106 "thread": "nvmf_tgt_poll_group_000", 00:18:08.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:08.106 "listen_address": { 00:18:08.106 "trtype": "TCP", 00:18:08.106 "adrfam": "IPv4", 00:18:08.106 "traddr": "10.0.0.2", 00:18:08.106 "trsvcid": "4420" 00:18:08.106 }, 00:18:08.106 "peer_address": { 00:18:08.106 "trtype": "TCP", 00:18:08.106 "adrfam": "IPv4", 00:18:08.106 "traddr": "10.0.0.1", 00:18:08.106 "trsvcid": "42826" 00:18:08.106 }, 00:18:08.106 "auth": { 00:18:08.106 "state": "completed", 00:18:08.106 "digest": "sha512", 00:18:08.106 "dhgroup": "ffdhe6144" 00:18:08.106 } 00:18:08.106 } 00:18:08.106 ]' 00:18:08.106 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.106 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.106 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.366 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.366 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.366 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.366 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.366 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.366 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret 
DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:18:08.366 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:09.302 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.561 00:18:09.820 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.820 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.820 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.820 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.820 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.820 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.820 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.820 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.820 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.820 { 00:18:09.820 "cntlid": 135, 00:18:09.820 "qid": 0, 00:18:09.820 "state": "enabled", 00:18:09.820 "thread": "nvmf_tgt_poll_group_000", 00:18:09.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:09.820 "listen_address": { 00:18:09.820 "trtype": "TCP", 00:18:09.820 "adrfam": "IPv4", 00:18:09.820 "traddr": "10.0.0.2", 00:18:09.820 "trsvcid": "4420" 00:18:09.820 }, 00:18:09.820 "peer_address": { 00:18:09.820 "trtype": "TCP", 00:18:09.821 "adrfam": "IPv4", 00:18:09.821 "traddr": "10.0.0.1", 00:18:09.821 "trsvcid": "42864" 00:18:09.821 }, 00:18:09.821 "auth": { 00:18:09.821 "state": "completed", 00:18:09.821 "digest": "sha512", 00:18:09.821 "dhgroup": "ffdhe6144" 00:18:09.821 } 00:18:09.821 } 00:18:09.821 ]' 00:18:09.821 07:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.821 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.821 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.080 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.080 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.080 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.080 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.080 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.339 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:18:10.339 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:18:10.907 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.907 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:10.907 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.907 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.907 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.907 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.907 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.907 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.907 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.174 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:11.174 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.174 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.174 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:11.174 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:11.174 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.174 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.174 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.174 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.174 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.174 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.174 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.174 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.440 00:18:11.440 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.440 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.440 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.700 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.700 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.700 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.700 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.701 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.701 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.701 { 00:18:11.701 "cntlid": 137, 00:18:11.701 "qid": 0, 00:18:11.701 "state": "enabled", 00:18:11.701 "thread": "nvmf_tgt_poll_group_000", 00:18:11.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:11.701 "listen_address": { 00:18:11.701 "trtype": "TCP", 00:18:11.701 "adrfam": "IPv4", 00:18:11.701 "traddr": "10.0.0.2", 00:18:11.701 "trsvcid": "4420" 00:18:11.701 }, 00:18:11.701 "peer_address": { 00:18:11.701 "trtype": "TCP", 00:18:11.701 "adrfam": "IPv4", 00:18:11.701 "traddr": "10.0.0.1", 00:18:11.701 "trsvcid": "42902" 00:18:11.701 }, 00:18:11.701 "auth": { 00:18:11.701 "state": "completed", 00:18:11.701 "digest": "sha512", 00:18:11.701 "dhgroup": "ffdhe8192" 00:18:11.701 } 00:18:11.701 } 00:18:11.701 ]' 00:18:11.701 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.701 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.701 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.701 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.701 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.960 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.960 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.960 07:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.960 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:18:11.960 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.901 07:32:30 
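Each connect_authenticate pass above repeats the same three-step RPC sequence. A condensed sketch, reusing the sockets, address, and NQNs seen in this run (target RPCs go to the default /var/tmp/spdk.sock, host RPCs to /var/tmp/host.sock); this is a reconstruction for readability, not a script from the harness:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6

# 1. pin the host (initiator) bdev layer to one digest/dhgroup combination
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# 2. allow the host on the target, naming keyring entries for the DH-HMAC-CHAP keys
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. attach from the host side; authentication runs during the CONNECT exchange
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0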
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.901 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.472 00:18:13.472 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.472 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.472 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.472 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.472 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.472 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.472 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.472 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.472 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.472 { 00:18:13.472 "cntlid": 139, 00:18:13.472 "qid": 0, 00:18:13.473 "state": "enabled", 00:18:13.473 "thread": "nvmf_tgt_poll_group_000", 00:18:13.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:13.473 "listen_address": { 00:18:13.473 "trtype": "TCP", 00:18:13.473 "adrfam": "IPv4", 00:18:13.473 "traddr": "10.0.0.2", 00:18:13.473 "trsvcid": "4420" 00:18:13.473 }, 00:18:13.473 "peer_address": { 00:18:13.473 "trtype": "TCP", 00:18:13.473 "adrfam": "IPv4", 00:18:13.473 "traddr": "10.0.0.1", 00:18:13.473 "trsvcid": "42928" 00:18:13.473 }, 00:18:13.473 "auth": { 00:18:13.473 "state": "completed", 00:18:13.473 "digest": "sha512", 00:18:13.473 "dhgroup": "ffdhe8192" 00:18:13.473 } 00:18:13.473 } 00:18:13.473 ]' 00:18:13.473 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.733 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.733 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.733 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.733 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.733 07:32:31 
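The [[ ... == ... ]] assertions above parse nvmf_subsystem_get_qpairs output with jq. Roughly, reusing $RPC and $SUBNQN from the sketch above:

qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # auth finished, not merely attempted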
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.733 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.733 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.994 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:18:13.994 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: --dhchap-ctrl-secret DHHC-1:02:YzE3YjIyNDUxYjEzNDY1ZjVmODA5MDljYTA1ODBhNTJkN2RlZDU2YzlhNTQ0NTEwGbd6/A==: 00:18:14.563 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.564 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:14.564 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.564 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.564 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.564 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.564 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.564 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.824 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:14.824 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.824 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.824 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:14.824 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:14.824 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.824 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.824 07:32:32 
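The --dhchap-secret/--dhchap-ctrl-secret strings handed to nvme connect above are NVMe "configured secret" representations: the NN in the DHHC-1:NN: prefix records the hash already applied to the raw secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), followed by base64 of the secret plus a checksum. Such strings can be produced with nvme-cli's gen-dhchap-key; a sketch, with flag spellings as in recent nvme-cli (verify against your installed version):

# 48-byte secret, pre-hashed with SHA-384 (-m 2), bound to the host NQN
nvme gen-dhchap-key -m 2 -l 48 \
    -n nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
# prints something like DHHC-1:02:<base64 secret + crc>: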
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.824 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.824 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.824 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.824 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.824 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.086 00:18:15.347 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.347 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.347 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.347 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.347 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.347 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.347 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.347 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.347 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.347 { 00:18:15.347 "cntlid": 141, 00:18:15.347 "qid": 0, 00:18:15.347 "state": "enabled", 00:18:15.347 "thread": "nvmf_tgt_poll_group_000", 00:18:15.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:15.347 "listen_address": { 00:18:15.347 "trtype": "TCP", 00:18:15.347 "adrfam": "IPv4", 00:18:15.347 "traddr": "10.0.0.2", 00:18:15.347 "trsvcid": "4420" 00:18:15.347 }, 00:18:15.347 "peer_address": { 00:18:15.347 "trtype": "TCP", 00:18:15.347 "adrfam": "IPv4", 00:18:15.347 "traddr": "10.0.0.1", 00:18:15.347 "trsvcid": "54230" 00:18:15.347 }, 00:18:15.347 "auth": { 00:18:15.347 "state": "completed", 00:18:15.347 "digest": "sha512", 00:18:15.347 "dhgroup": "ffdhe8192" 00:18:15.347 } 00:18:15.347 } 00:18:15.347 ]' 00:18:15.347 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.347 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.347 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.607 07:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.607 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.607 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.607 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.607 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.868 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:18:15.868 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:01:ZDFmYTQ1NWQ0MTA2YmVmYTUxMjhiOWJiNjcyMzEwZTNSS2Po: 00:18:16.439 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.439 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:16.439 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.439 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.439 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.439 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.439 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.439 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.700 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:16.700 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.700 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.700 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:16.700 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:16.700 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.700 07:32:34 
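Note that key3 has no matching controller key, so the ckey=() expansion above comes out empty and --dhchap-ctrlr-key is dropped entirely: the key3 passes exercise unidirectional authentication (the controller verifies the host only), while key0 through key2 run bidirectionally. The two shapes, side by side (same variables as the earlier sketch):

# bidirectional: the host also challenges the controller
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# unidirectional: the controller challenges the host only
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3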
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:16.700 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.700 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.700 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.700 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.700 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.700 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.961 00:18:16.962 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.962 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.962 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.222 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.222 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.222 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.222 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.222 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.222 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.222 { 00:18:17.222 "cntlid": 143, 00:18:17.222 "qid": 0, 00:18:17.222 "state": "enabled", 00:18:17.222 "thread": "nvmf_tgt_poll_group_000", 00:18:17.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:17.222 "listen_address": { 00:18:17.222 "trtype": "TCP", 00:18:17.222 "adrfam": "IPv4", 00:18:17.222 "traddr": "10.0.0.2", 00:18:17.222 "trsvcid": "4420" 00:18:17.222 }, 00:18:17.222 "peer_address": { 00:18:17.222 "trtype": "TCP", 00:18:17.222 "adrfam": "IPv4", 00:18:17.222 "traddr": "10.0.0.1", 00:18:17.222 "trsvcid": "54266" 00:18:17.222 }, 00:18:17.222 "auth": { 00:18:17.222 "state": "completed", 00:18:17.222 "digest": "sha512", 00:18:17.222 "dhgroup": "ffdhe8192" 00:18:17.222 } 00:18:17.222 } 00:18:17.222 ]' 00:18:17.222 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.222 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.222 
07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.481 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.481 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.481 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.481 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.481 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.481 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:18:17.481 07:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.424 07:32:36 
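After the per-combination loops, target/auth.sh@129-130 rebuild comma-joined lists and re-arm the host with every digest and dhgroup at once, so the remaining passes exercise negotiation rather than a pinned pair. Roughly:

digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
IFS=,   # makes "${array[*]}" join with commas, as the printf at @130 does
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "${digests[*]}" --dhchap-dhgroups "${dhgroups[*]}"
unset IFS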
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.424 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.996 00:18:18.996 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.996 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.996 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.996 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.996 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.996 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.996 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.255 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.255 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.255 { 00:18:19.255 "cntlid": 145, 00:18:19.255 "qid": 0, 00:18:19.255 "state": "enabled", 00:18:19.255 "thread": "nvmf_tgt_poll_group_000", 00:18:19.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:19.256 "listen_address": { 00:18:19.256 "trtype": "TCP", 00:18:19.256 "adrfam": "IPv4", 00:18:19.256 "traddr": "10.0.0.2", 00:18:19.256 "trsvcid": "4420" 00:18:19.256 }, 00:18:19.256 "peer_address": { 00:18:19.256 
"trtype": "TCP", 00:18:19.256 "adrfam": "IPv4", 00:18:19.256 "traddr": "10.0.0.1", 00:18:19.256 "trsvcid": "54288" 00:18:19.256 }, 00:18:19.256 "auth": { 00:18:19.256 "state": "completed", 00:18:19.256 "digest": "sha512", 00:18:19.256 "dhgroup": "ffdhe8192" 00:18:19.256 } 00:18:19.256 } 00:18:19.256 ]' 00:18:19.256 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.256 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.256 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.256 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.256 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.256 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.256 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.256 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.515 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:18:19.516 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YjFlNTY5MjhkMTE3YmM0M2I2M2RjODNjYzE0ZjhmOWU2OTM2ZWYzMzkyZTZiZTUywXsZ1A==: --dhchap-ctrl-secret DHHC-1:03:MzMwOTg0MzU0ZDc5OTU0ZDVhODgwNTYwY2UxZDZhMWE2ZTE4NmE5MTY3Yzc4ZWUzN2Y2OGFkODBkNDc1NTU1N1FHMKw=: 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:20.085 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.086 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:20.086 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:20.086 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:20.655 request: 00:18:20.655 { 00:18:20.655 "name": "nvme0", 00:18:20.655 "trtype": "tcp", 00:18:20.655 "traddr": "10.0.0.2", 00:18:20.655 "adrfam": "ipv4", 00:18:20.655 "trsvcid": "4420", 00:18:20.655 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:20.655 "prchk_reftag": false, 00:18:20.655 "prchk_guard": false, 00:18:20.655 "hdgst": false, 00:18:20.655 "ddgst": false, 00:18:20.655 "dhchap_key": "key2", 00:18:20.655 "allow_unrecognized_csi": false, 00:18:20.655 "method": "bdev_nvme_attach_controller", 00:18:20.655 "req_id": 1 00:18:20.655 } 00:18:20.655 Got JSON-RPC error response 00:18:20.655 response: 00:18:20.655 { 00:18:20.655 "code": -5, 00:18:20.655 "message": "Input/output error" 00:18:20.655 } 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.655 07:32:38 
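At target/auth.sh@145 the run flips to negative testing: the host was registered with key1, so attaching with key2 must fail, and the harness's NOT wrapper inverts the exit status. A minimal stand-in for that wrapper, under the same variables as before:

# succeed only if the wrapped command fails
NOT() { if "$@"; then return 1; else return 0; fi; }

# host side presents key2, target expects key1 -> CONNECT is rejected
NOT $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2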
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.655 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.656 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.656 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.915 request: 00:18:20.915 { 00:18:20.915 "name": "nvme0", 00:18:20.915 "trtype": "tcp", 00:18:20.915 "traddr": "10.0.0.2", 00:18:20.915 "adrfam": "ipv4", 00:18:20.915 "trsvcid": "4420", 00:18:20.915 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:20.915 "prchk_reftag": false, 00:18:20.915 "prchk_guard": false, 00:18:20.915 "hdgst": false, 00:18:20.915 "ddgst": false, 00:18:20.915 "dhchap_key": "key1", 00:18:20.915 "dhchap_ctrlr_key": "ckey2", 00:18:20.915 "allow_unrecognized_csi": false, 00:18:20.915 "method": "bdev_nvme_attach_controller", 00:18:20.915 "req_id": 1 00:18:20.915 } 00:18:20.915 Got JSON-RPC error response 00:18:20.915 response: 00:18:20.915 { 00:18:20.915 "code": -5, 00:18:20.915 "message": "Input/output error" 00:18:20.915 } 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:21.176 07:32:39 
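In each failing attach, rpc.py echoes the request object and then the JSON-RPC error; the code -5 seen here is a negated errno (-EIO, "Input/output error"), which is how the authentication failure surfaces. A script can key off that shape, for example:

if ! out=$($RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 2>&1); then
    grep -q '"code": -5' <<< "$out"   # DH-HMAC-CHAP rejected the connection
fi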
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.176 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.435 request: 00:18:21.435 { 00:18:21.435 "name": "nvme0", 00:18:21.435 "trtype": "tcp", 00:18:21.435 "traddr": "10.0.0.2", 00:18:21.435 "adrfam": "ipv4", 00:18:21.435 "trsvcid": "4420", 00:18:21.435 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:21.435 "prchk_reftag": false, 00:18:21.435 "prchk_guard": false, 00:18:21.435 "hdgst": false, 00:18:21.435 "ddgst": false, 00:18:21.435 "dhchap_key": "key1", 00:18:21.435 "dhchap_ctrlr_key": "ckey1", 00:18:21.435 "allow_unrecognized_csi": false, 00:18:21.435 "method": "bdev_nvme_attach_controller", 00:18:21.435 "req_id": 1 00:18:21.435 } 00:18:21.435 Got JSON-RPC error response 00:18:21.435 response: 00:18:21.435 { 00:18:21.435 "code": -5, 00:18:21.435 "message": "Input/output error" 00:18:21.435 } 00:18:21.435 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:21.436 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.436 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.436 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.436 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:21.436 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.436 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.436 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.436 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3370129 00:18:21.436 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3370129 ']' 00:18:21.436 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3370129 00:18:21.436 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:21.436 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:21.436 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3370129 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3370129' 00:18:21.696 killing process with pid 3370129 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3370129 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3370129 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3395693 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3395693 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3395693 ']' 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:21.696 07:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3395693 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3395693 ']' 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
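target/auth.sh@159-163 replace the first target with one started under --wait-for-rpc -L nvmf_auth: --wait-for-rpc holds the app in an RPC-only state until framework_start_init is issued, and -L nvmf_auth enables the auth debug log; waitforlisten then polls the new pid's RPC socket until it answers. A sketch of both, reusing the netns and paths from this run (the polling helper is a minimal equivalent, not the harness's exact code):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                    # app died
        $RPC -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}
waitforlisten "$nvmfpid"
$RPC framework_start_init   # leave the RPC-only state; subsystems initialize now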
00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.635 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.895 null0 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ogh 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.6Ic ]] 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Ic 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.WEG 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Xim ]] 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xim 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:22.895 07:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yNu 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.895 07:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.EWu ]] 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EWu 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Xne 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
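[editor's note] The keyring_file_add_key calls above load each generated DH-HMAC-CHAP secret (key0..key3) and its controller counterpart (ckey0..ckey2) from /tmp into the target's keyring; the connect_authenticate round running here then pairs a target-side nvmf_subsystem_add_host with the host-side attach that follows. A condensed sketch of that pairing for the sha512/ffdhe8192 case, with the NQNs, address, and key file taken verbatim from this log:

    # target side: register the secret and grant it to the host NQN
    scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.Xne
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        --dhchap-key key3
    # host side: attach a controller, authenticating with the same key
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3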
00:18:22.895 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.833 nvme0n1 00:18:23.833 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.833 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.833 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.833 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.833 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.833 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.833 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.833 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.833 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.833 { 00:18:23.833 "cntlid": 1, 00:18:23.833 "qid": 0, 00:18:23.833 "state": "enabled", 00:18:23.833 "thread": "nvmf_tgt_poll_group_000", 00:18:23.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:23.833 "listen_address": { 00:18:23.833 "trtype": "TCP", 00:18:23.833 "adrfam": "IPv4", 00:18:23.833 "traddr": "10.0.0.2", 00:18:23.833 "trsvcid": "4420" 00:18:23.833 }, 00:18:23.833 "peer_address": { 00:18:23.833 "trtype": "TCP", 00:18:23.833 "adrfam": "IPv4", 00:18:23.833 "traddr": "10.0.0.1", 00:18:23.833 "trsvcid": "54344" 00:18:23.833 }, 00:18:23.833 "auth": { 00:18:23.833 "state": "completed", 00:18:23.833 "digest": "sha512", 00:18:23.833 "dhgroup": "ffdhe8192" 00:18:23.833 } 00:18:23.833 } 00:18:23.833 ]' 00:18:23.833 07:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.833 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.833 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.094 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.094 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.094 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.094 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.094 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.094 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:18:24.094 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:18:25.032 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.032 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:25.032 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.032 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.032 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.032 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:25.032 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.032 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.032 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.032 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:25.032 07:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:25.032 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:25.032 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:25.032 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:25.032 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:25.032 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.032 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:25.032 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.032 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:25.032 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.032 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.292 request: 00:18:25.292 { 00:18:25.292 "name": "nvme0", 00:18:25.292 "trtype": "tcp", 00:18:25.292 "traddr": "10.0.0.2", 00:18:25.292 "adrfam": "ipv4", 00:18:25.292 "trsvcid": "4420", 00:18:25.292 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:25.292 "prchk_reftag": false, 00:18:25.292 "prchk_guard": false, 00:18:25.292 "hdgst": false, 00:18:25.292 "ddgst": false, 00:18:25.292 "dhchap_key": "key3", 00:18:25.292 "allow_unrecognized_csi": false, 00:18:25.292 "method": "bdev_nvme_attach_controller", 00:18:25.292 "req_id": 1 00:18:25.292 } 00:18:25.292 Got JSON-RPC error response 00:18:25.292 response: 00:18:25.292 { 00:18:25.292 "code": -5, 00:18:25.292 "message": "Input/output error" 00:18:25.292 } 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.292 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:25.293 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.293 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.554 request: 00:18:25.554 { 00:18:25.554 "name": "nvme0", 00:18:25.554 "trtype": "tcp", 00:18:25.554 "traddr": "10.0.0.2", 00:18:25.554 "adrfam": "ipv4", 00:18:25.554 "trsvcid": "4420", 00:18:25.554 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:25.554 "prchk_reftag": false, 00:18:25.554 "prchk_guard": false, 00:18:25.554 "hdgst": false, 00:18:25.554 "ddgst": false, 00:18:25.554 "dhchap_key": "key3", 00:18:25.554 "allow_unrecognized_csi": false, 00:18:25.554 "method": "bdev_nvme_attach_controller", 00:18:25.554 "req_id": 1 00:18:25.554 } 00:18:25.554 Got JSON-RPC error response 00:18:25.554 response: 00:18:25.554 { 00:18:25.554 "code": -5, 00:18:25.554 "message": "Input/output error" 00:18:25.554 } 00:18:25.554 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:25.554 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:25.554 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:25.554 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:25.554 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:25.554 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:25.554 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:25.554 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.554 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.554 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.815 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:25.816 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.816 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.816 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.816 07:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.076 request: 00:18:26.076 { 00:18:26.076 "name": "nvme0", 00:18:26.076 "trtype": "tcp", 00:18:26.076 "traddr": "10.0.0.2", 00:18:26.076 "adrfam": "ipv4", 00:18:26.076 "trsvcid": "4420", 00:18:26.076 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:26.076 "prchk_reftag": false, 00:18:26.076 "prchk_guard": false, 00:18:26.076 "hdgst": false, 00:18:26.076 "ddgst": false, 00:18:26.076 "dhchap_key": "key0", 00:18:26.076 "dhchap_ctrlr_key": "key1", 00:18:26.076 "allow_unrecognized_csi": false, 00:18:26.076 "method": "bdev_nvme_attach_controller", 00:18:26.076 "req_id": 1 00:18:26.076 } 00:18:26.076 Got JSON-RPC error response 00:18:26.076 response: 00:18:26.076 { 00:18:26.076 "code": -5, 00:18:26.076 "message": "Input/output error" 00:18:26.076 } 00:18:26.076 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:26.076 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:26.076 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:26.076 07:32:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:26.076 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:26.076 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:26.076 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:26.337 nvme0n1 00:18:26.337 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:26.337 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:26.337 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.597 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.597 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.597 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.597 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:26.597 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.597 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.857 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.857 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:26.857 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:26.857 07:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:27.427 nvme0n1 00:18:27.427 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:27.427 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:27.427 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.687 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.687 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:27.687 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.687 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.687 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.687 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:27.687 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:27.687 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.947 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.947 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:18:27.947 07:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: --dhchap-ctrl-secret DHHC-1:03:ODUxMzRhY2RlNGMwMTdlYzkyYzIyZWQ3NGM5YjJmM2MwMzM4YWUyNGNhOWEwNjI4ZTQ2YzI3ZmU5ZTY1Mjg3NY9TnkE=: 00:18:28.517 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:28.517 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:28.517 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:28.517 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:28.517 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:28.517 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:28.517 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:28.517 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.517 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.778 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:28.778 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:28.778 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:28.778 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:28.778 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.778 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:28.778 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.778 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:28.778 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:28.778 07:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:29.039 request: 00:18:29.039 { 00:18:29.039 "name": "nvme0", 00:18:29.039 "trtype": "tcp", 00:18:29.039 "traddr": "10.0.0.2", 00:18:29.039 "adrfam": "ipv4", 00:18:29.039 "trsvcid": "4420", 00:18:29.039 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:29.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:29.039 "prchk_reftag": false, 00:18:29.039 "prchk_guard": false, 00:18:29.039 "hdgst": false, 00:18:29.039 "ddgst": false, 00:18:29.039 "dhchap_key": "key1", 00:18:29.039 "allow_unrecognized_csi": false, 00:18:29.039 "method": "bdev_nvme_attach_controller", 00:18:29.039 "req_id": 1 00:18:29.039 } 00:18:29.039 Got JSON-RPC error response 00:18:29.039 response: 00:18:29.039 { 00:18:29.039 "code": -5, 00:18:29.039 "message": "Input/output error" 00:18:29.039 } 00:18:29.039 07:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:29.039 07:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:29.039 07:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:29.039 07:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:29.039 07:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.039 07:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.039 07:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.978 nvme0n1 00:18:29.978 07:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:29.978 07:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:29.978 07:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.978 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.978 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.978 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.237 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:30.237 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.238 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.238 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.238 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:30.238 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:30.238 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:30.497 nvme0n1 00:18:30.497 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:30.497 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:30.497 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.757 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.757 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.757 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.757 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:30.757 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.757 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.018 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.018 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: '' 2s 00:18:31.018 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:31.018 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:31.018 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: 00:18:31.018 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:31.018 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:31.018 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:31.018 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: ]] 00:18:31.018 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZGM5MTAyYjgzY2UzMTUyOGVhZGIxYjk3MjI5OWZmYzUD+ZAO: 00:18:31.018 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:31.018 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:31.018 07:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:32.927 07:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:32.927 07:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:32.927 07:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:32.927 07:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:32.927 07:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:32.927 07:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: 2s 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: ]] 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MjdkOTBjNjIxNTNhY2RjNTcwZmI1MzMyNDU1Zjk5MmE5YzFlMjg1MGVmOTE1NDVj78EgMA==: 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:32.927 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:34.835 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:34.835 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:34.835 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:34.835 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:35.094 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:35.094 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:35.094 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:35.094 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.094 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:35.094 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.094 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.094 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.094 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.095 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.095 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.664 nvme0n1 00:18:35.664 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.664 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.664 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.664 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.664 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.664 07:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:36.234 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:36.235 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:36.235 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.495 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.495 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:36.495 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.495 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.495 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.495 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:36.495 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:36.495 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:36.495 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:36.495 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.756 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.756 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:36.756 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.756 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.756 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.756 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:36.756 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:36.756 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:36.756 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:36.756 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.756 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:36.756 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.756 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:36.757 07:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:37.325 request: 00:18:37.325 { 00:18:37.325 "name": "nvme0", 00:18:37.325 "dhchap_key": "key1", 00:18:37.325 "dhchap_ctrlr_key": "key3", 00:18:37.325 "method": "bdev_nvme_set_keys", 00:18:37.325 "req_id": 1 00:18:37.325 } 00:18:37.325 Got JSON-RPC error response 00:18:37.325 response: 00:18:37.325 { 00:18:37.326 "code": -13, 00:18:37.326 "message": "Permission denied" 00:18:37.326 } 00:18:37.326 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:37.326 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.326 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.326 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.326 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:37.326 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:37.326 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.326 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:37.326 07:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:38.708 07:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:38.708 07:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:38.708 07:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.708 07:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:38.708 07:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:38.708 07:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.708 07:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.708 07:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.708 07:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:38.708 07:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:38.708 07:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:39.279 nvme0n1 00:18:39.279 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.279 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.279 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.539 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.539 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.539 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:39.539 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.539 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
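[editor's note] The NOT/valid_exec_arg scaffolding executing here is the suite's expected-failure idiom: after nvmf_subsystem_set_keys restricted the subsystem to key2/key3, the host-side bdev_nvme_set_keys attempt with --dhchap-ctrlr-key key0 must be rejected, and NOT inverts that failure into a pass (the -13 "Permission denied" response appears just below). A simplified equivalent of the idiom; the real helper in autotest_common.sh also classifies the exit status (the es=0/es=1 bookkeeping visible in the trace):

    # succeed only if the wrapped command fails
    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key0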
00:18:39.539 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.539 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:39.539 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.539 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.539 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:39.800 request: 00:18:39.800 { 00:18:39.800 "name": "nvme0", 00:18:39.800 "dhchap_key": "key2", 00:18:39.800 "dhchap_ctrlr_key": "key0", 00:18:39.800 "method": "bdev_nvme_set_keys", 00:18:39.800 "req_id": 1 00:18:39.800 } 00:18:39.800 Got JSON-RPC error response 00:18:39.800 response: 00:18:39.800 { 00:18:39.800 "code": -13, 00:18:39.800 "message": "Permission denied" 00:18:39.800 } 00:18:39.800 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:39.800 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.800 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.800 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.800 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:39.800 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.800 07:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:40.078 07:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:40.078 07:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:41.095 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:41.095 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:41.095 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3370297 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3370297 ']' 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3370297 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:41.355 
07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3370297 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3370297' 00:18:41.355 killing process with pid 3370297 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3370297 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3370297 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:41.355 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:41.615 rmmod nvme_tcp 00:18:41.615 rmmod nvme_fabrics 00:18:41.615 rmmod nvme_keyring 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3395693 ']' 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3395693 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3395693 ']' 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3395693 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3395693 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3395693' 00:18:41.615 killing process with pid 3395693 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3395693 00:18:41.615 07:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3395693 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.615 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.157 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:44.157 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Ogh /tmp/spdk.key-sha256.WEG /tmp/spdk.key-sha384.yNu /tmp/spdk.key-sha512.Xne /tmp/spdk.key-sha512.6Ic /tmp/spdk.key-sha384.Xim /tmp/spdk.key-sha256.EWu '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:44.157 00:18:44.157 real 2m37.062s 00:18:44.157 user 5m52.913s 00:18:44.157 sys 0m24.969s 00:18:44.157 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:44.157 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.157 ************************************ 00:18:44.157 END TEST nvmf_auth_target 00:18:44.157 ************************************ 00:18:44.157 07:33:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:44.157 07:33:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:44.157 07:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:44.157 07:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:44.157 07:33:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:44.157 ************************************ 00:18:44.157 START TEST nvmf_bdevio_no_huge 00:18:44.157 ************************************ 00:18:44.157 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:44.158 * Looking for test storage... 
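The auth suite that just ended (2m37s of wall time) tears itself down the same way every suite here does: nvmftestfini syncs, unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules, strips only the iptables rules the harness tagged with an SPDK_NVMF comment, flushes the test interface, and removes the generated key files, so the next suite starts from a clean host. run_test then launches bdevio.sh with --transport=tcp --no-hugepages. The recurring iptr idiom, as a condensed sketch rather than the exact nvmf/common.sh code:

    # iptr: drop only the rules this harness added (matched by their
    # "-m comment --comment SPDK_NVMF:..." tag), keep everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1          # test-side interface from this run
    rm -f /tmp/spdk.key-*             # generated DH-HMAC-CHAP key files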
00:18:44.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:44.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.158 --rc genhtml_branch_coverage=1 00:18:44.158 --rc genhtml_function_coverage=1 00:18:44.158 --rc genhtml_legend=1 00:18:44.158 --rc geninfo_all_blocks=1 00:18:44.158 --rc geninfo_unexecuted_blocks=1 00:18:44.158 00:18:44.158 ' 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:44.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.158 --rc genhtml_branch_coverage=1 00:18:44.158 --rc genhtml_function_coverage=1 00:18:44.158 --rc genhtml_legend=1 00:18:44.158 --rc geninfo_all_blocks=1 00:18:44.158 --rc geninfo_unexecuted_blocks=1 00:18:44.158 00:18:44.158 ' 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:44.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.158 --rc genhtml_branch_coverage=1 00:18:44.158 --rc genhtml_function_coverage=1 00:18:44.158 --rc genhtml_legend=1 00:18:44.158 --rc geninfo_all_blocks=1 00:18:44.158 --rc geninfo_unexecuted_blocks=1 00:18:44.158 00:18:44.158 ' 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:44.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.158 --rc genhtml_branch_coverage=1 00:18:44.158 --rc genhtml_function_coverage=1 00:18:44.158 --rc genhtml_legend=1 00:18:44.158 --rc geninfo_all_blocks=1 00:18:44.158 --rc geninfo_unexecuted_blocks=1 00:18:44.158 00:18:44.158 ' 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.158 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:44.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:44.159 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:52.299 
07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:52.299 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:52.299 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.299 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:52.300 Found net devices under 0000:31:00.0: cvl_0_0 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:52.300 Found net devices under 0000:31:00.1: cvl_0_1 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:52.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:18:52.300 00:18:52.300 --- 10.0.0.2 ping statistics --- 00:18:52.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.300 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:18:52.300 00:18:52.300 --- 10.0.0.1 ping statistics --- 00:18:52.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.300 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3404090 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3404090 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 3404090 ']' 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:52.300 07:33:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.300 [2024-11-20 07:33:09.881153] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:18:52.300 [2024-11-20 07:33:09.881224] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:52.300 [2024-11-20 07:33:09.991131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.300 [2024-11-20 07:33:10.056106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.300 [2024-11-20 07:33:10.056165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.300 [2024-11-20 07:33:10.056174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.300 [2024-11-20 07:33:10.056182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.300 [2024-11-20 07:33:10.056189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
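nvmfpid 3404090 above is the target process this suite exercises: it runs inside the cvl_0_0_ns_spdk namespace with hugepages disabled, so DPDK falls back to 1024 MB of ordinary memory (-s 1024 combined with --no-huge), and the 0x78 core mask pins reactors to cores 3 through 6, which is exactly the set of "Reactor started" notices that follow. Reduced to a single launch line (the binary path is this workspace's; waitforlisten then polls /var/tmp/spdk.sock until the app answers):

    # Target launch as traced above: no hugepages, 1024 MB, cores 3-6.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78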
00:18:52.300 [2024-11-20 07:33:10.058778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:52.300 [2024-11-20 07:33:10.059084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:52.300 [2024-11-20 07:33:10.059246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:52.300 [2024-11-20 07:33:10.059249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.562 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:52.562 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:18:52.562 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:52.562 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:52.562 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.562 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.562 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:52.562 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.562 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.562 [2024-11-20 07:33:10.762193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.823 Malloc0 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.823 [2024-11-20 07:33:10.816084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:52.823 { 00:18:52.823 "params": { 00:18:52.823 "name": "Nvme$subsystem", 00:18:52.823 "trtype": "$TEST_TRANSPORT", 00:18:52.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.823 "adrfam": "ipv4", 00:18:52.823 "trsvcid": "$NVMF_PORT", 00:18:52.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.823 "hdgst": ${hdgst:-false}, 00:18:52.823 "ddgst": ${ddgst:-false} 00:18:52.823 }, 00:18:52.823 "method": "bdev_nvme_attach_controller" 00:18:52.823 } 00:18:52.823 EOF 00:18:52.823 )") 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:52.823 07:33:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:52.823 "params": { 00:18:52.823 "name": "Nvme1", 00:18:52.823 "trtype": "tcp", 00:18:52.823 "traddr": "10.0.0.2", 00:18:52.823 "adrfam": "ipv4", 00:18:52.823 "trsvcid": "4420", 00:18:52.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.823 "hdgst": false, 00:18:52.823 "ddgst": false 00:18:52.823 }, 00:18:52.823 "method": "bdev_nvme_attach_controller" 00:18:52.823 }' 00:18:52.823 [2024-11-20 07:33:10.874814] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
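Everything bdevio needs was created through five RPCs: a TCP transport with an 8192-byte IO unit, a 64 MiB malloc bdev with 512-byte blocks, a subsystem with allow-any-host, its namespace, and a listener on 10.0.0.2:4420. gen_nvmf_target_json then renders the heredoc above into the --json config handed to bdevio on /dev/fd/62, telling it how to attach Nvme1. The same bring-up as direct rpc.py calls (rpc_cmd in the trace is the harness's wrapper around rpc.py against /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420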
00:18:52.823 [2024-11-20 07:33:10.874893] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3404651 ] 00:18:52.823 [2024-11-20 07:33:10.974569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:53.084 [2024-11-20 07:33:11.036259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.084 [2024-11-20 07:33:11.036424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.084 [2024-11-20 07:33:11.036424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.084 I/O targets: 00:18:53.084 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:53.084 00:18:53.084 00:18:53.084 CUnit - A unit testing framework for C - Version 2.1-3 00:18:53.084 http://cunit.sourceforge.net/ 00:18:53.084 00:18:53.084 00:18:53.084 Suite: bdevio tests on: Nvme1n1 00:18:53.345 Test: blockdev write read block ...passed 00:18:53.345 Test: blockdev write zeroes read block ...passed 00:18:53.345 Test: blockdev write zeroes read no split ...passed 00:18:53.345 Test: blockdev write zeroes read split ...passed 00:18:53.345 Test: blockdev write zeroes read split partial ...passed 00:18:53.345 Test: blockdev reset ...[2024-11-20 07:33:11.446481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:53.345 [2024-11-20 07:33:11.446578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1797400 (9): Bad file descriptor 00:18:53.345 [2024-11-20 07:33:11.505538] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:53.345 passed 00:18:53.345 Test: blockdev write read 8 blocks ...passed 00:18:53.345 Test: blockdev write read size > 128k ...passed 00:18:53.345 Test: blockdev write read invalid size ...passed 00:18:53.607 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:53.607 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:53.607 Test: blockdev write read max offset ...passed 00:18:53.607 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:53.607 Test: blockdev writev readv 8 blocks ...passed 00:18:53.607 Test: blockdev writev readv 30 x 1block ...passed 00:18:53.607 Test: blockdev writev readv block ...passed 00:18:53.607 Test: blockdev writev readv size > 128k ...passed 00:18:53.607 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:53.607 Test: blockdev comparev and writev ...[2024-11-20 07:33:11.722533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.607 [2024-11-20 07:33:11.722583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:53.607 [2024-11-20 07:33:11.722601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.607 [2024-11-20 07:33:11.722610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:53.607 [2024-11-20 07:33:11.722897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.607 [2024-11-20 07:33:11.722913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:53.607 [2024-11-20 07:33:11.722932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.607 [2024-11-20 07:33:11.722945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:53.607 [2024-11-20 07:33:11.723242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.607 [2024-11-20 07:33:11.723263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:53.607 [2024-11-20 07:33:11.723284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.607 [2024-11-20 07:33:11.723297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:53.607 [2024-11-20 07:33:11.723571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.607 [2024-11-20 07:33:11.723592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:53.607 [2024-11-20 07:33:11.723613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.607 [2024-11-20 07:33:11.723628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:53.607 passed 00:18:53.607 Test: blockdev nvme passthru rw ...passed 00:18:53.607 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:33:11.807010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.607 [2024-11-20 07:33:11.807045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:53.607 [2024-11-20 07:33:11.807156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.607 [2024-11-20 07:33:11.807166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:53.607 [2024-11-20 07:33:11.807282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.607 [2024-11-20 07:33:11.807292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:53.607 [2024-11-20 07:33:11.807402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.607 [2024-11-20 07:33:11.807412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:53.607 passed 00:18:53.868 Test: blockdev nvme admin passthru ...passed 00:18:53.868 Test: blockdev copy ...passed 00:18:53.868 00:18:53.868 Run Summary: Type Total Ran Passed Failed Inactive 00:18:53.868 suites 1 1 n/a 0 0 00:18:53.868 tests 23 23 23 0 0 00:18:53.868 asserts 152 152 152 0 n/a 00:18:53.868 00:18:53.868 Elapsed time = 1.234 seconds 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:54.130 rmmod nvme_tcp 00:18:54.130 rmmod nvme_fabrics 00:18:54.130 rmmod nvme_keyring 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3404090 ']' 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3404090 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 3404090 ']' 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 3404090 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:54.130 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3404090 00:18:54.392 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:18:54.392 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:18:54.392 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3404090' 00:18:54.392 killing process with pid 3404090 00:18:54.392 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 3404090 00:18:54.392 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 3404090 00:18:54.652 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:54.652 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:54.652 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:54.652 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:54.652 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:54.652 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:54.652 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:54.652 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:54.652 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:54.652 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.652 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.652 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.567 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:56.568 00:18:56.568 real 0m12.728s 00:18:56.568 user 0m14.617s 00:18:56.568 sys 0m6.745s 00:18:56.568 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:56.568 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.568 ************************************ 00:18:56.568 END TEST nvmf_bdevio_no_huge 00:18:56.568 ************************************ 00:18:56.568 07:33:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:56.568 07:33:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:56.568 07:33:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:56.568 07:33:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:56.568 ************************************ 00:18:56.568 START TEST nvmf_tls 00:18:56.568 ************************************ 00:18:56.568 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:56.830 * Looking for test storage... 00:18:56.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:56.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.830 --rc genhtml_branch_coverage=1 00:18:56.830 --rc genhtml_function_coverage=1 00:18:56.830 --rc genhtml_legend=1 00:18:56.830 --rc geninfo_all_blocks=1 00:18:56.830 --rc geninfo_unexecuted_blocks=1 00:18:56.830 00:18:56.830 ' 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:56.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.830 --rc genhtml_branch_coverage=1 00:18:56.830 --rc genhtml_function_coverage=1 00:18:56.830 --rc genhtml_legend=1 00:18:56.830 --rc geninfo_all_blocks=1 00:18:56.830 --rc geninfo_unexecuted_blocks=1 00:18:56.830 00:18:56.830 ' 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:56.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.830 --rc genhtml_branch_coverage=1 00:18:56.830 --rc genhtml_function_coverage=1 00:18:56.830 --rc genhtml_legend=1 00:18:56.830 --rc geninfo_all_blocks=1 00:18:56.830 --rc geninfo_unexecuted_blocks=1 00:18:56.830 00:18:56.830 ' 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:56.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.830 --rc genhtml_branch_coverage=1 00:18:56.830 --rc genhtml_function_coverage=1 00:18:56.830 --rc genhtml_legend=1 00:18:56.830 --rc geninfo_all_blocks=1 00:18:56.830 --rc geninfo_unexecuted_blocks=1 00:18:56.830 00:18:56.830 ' 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
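Note on the lcov version probe traced above: `lt 1.15 2` dispatches to `cmp_versions` in scripts/common.sh, which splits both version strings on `.`, `-`, and `:` and compares them field by field. A minimal sketch reconstructed from the traced lines; the `decimal()` fallback for non-numeric fields and the handling of the equality operators are assumptions, not taken from this run:

    # Sketch of the comparison seen in the xtrace above. Only the control
    # flow shown in the trace is certain; defaults for missing or
    # non-numeric fields are assumed.
    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] || d=0  # assumed fallback; the trace only shows numeric fields
        echo "$d"
    }

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        # walk as many fields as the longer version string has
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            ((ver1[v] > ver2[v])) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]  # all fields equal
    }

    lt() { cmp_versions "$1" '<' "$2"; }  # lt 1.15 2 succeeds, as in the trace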
00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.830 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.831 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.831 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.831 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:56.831 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:56.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:56.831 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:04.989 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:04.989 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:04.990 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:04.990 Found net devices under 0000:31:00.0: cvl_0_0 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:04.990 Found net devices under 0000:31:00.1: cvl_0_1 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:04.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:04.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:19:04.990 00:19:04.990 --- 10.0.0.2 ping statistics --- 00:19:04.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.990 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:04.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:04.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:19:04.990 00:19:04.990 --- 10.0.0.1 ping statistics --- 00:19:04.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.990 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3409221 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3409221 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3409221 ']' 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:04.990 07:33:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.990 [2024-11-20 07:33:22.723515] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
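Condensed, the nvmf_tcp_init sequence traced above builds a two-endpoint topology on a single host: the target-side port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, an iptables rule admits TCP/4420, and a ping in each direction verifies the path. The same steps gathered in one place; interface names and addresses are the ones this run detected, not fixed constants:

    # condensed from the nvmf_tcp_init trace above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator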
00:19:04.990 [2024-11-20 07:33:22.723583] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.990 [2024-11-20 07:33:22.827257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.990 [2024-11-20 07:33:22.877924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.990 [2024-11-20 07:33:22.877979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.990 [2024-11-20 07:33:22.877988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.990 [2024-11-20 07:33:22.877994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.990 [2024-11-20 07:33:22.878001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.990 [2024-11-20 07:33:22.878836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.562 07:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:05.562 07:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:05.562 07:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:05.562 07:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:05.562 07:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.562 07:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.562 07:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:05.562 07:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:05.562 true 00:19:05.823 07:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:05.823 07:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:05.823 07:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:05.823 07:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:05.823 07:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:06.083 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:06.083 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:06.345 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:06.345 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:06.345 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:06.345 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:06.345 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:06.606 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:06.606 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:06.606 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:06.606 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:06.867 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:06.867 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:06.867 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:06.867 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:06.867 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:07.128 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:07.128 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:07.128 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:07.388 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:07.388 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:07.388 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:07.388 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:07.388 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:07.388 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:07.388 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:07.388 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:07.388 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:07.388 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:07.388 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:07.388 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.xNp1qyUTrn 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.MkHP4hzzD3 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.xNp1qyUTrn 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.MkHP4hzzD3 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:07.649 07:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:07.910 07:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.xNp1qyUTrn 00:19:07.910 07:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xNp1qyUTrn 00:19:07.910 07:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:08.171 [2024-11-20 07:33:26.210006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.171 07:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:08.432 07:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:08.432 [2024-11-20 07:33:26.546823] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:08.432 [2024-11-20 07:33:26.547023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.432 07:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:08.692 malloc0 00:19:08.692 07:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:08.692 07:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.xNp1qyUTrn 00:19:08.952 07:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:09.212 07:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xNp1qyUTrn 00:19:19.208 Initializing NVMe Controllers 00:19:19.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:19.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:19.208 Initialization complete. Launching workers. 00:19:19.208 ======================================================== 00:19:19.208 Latency(us) 00:19:19.208 Device Information : IOPS MiB/s Average min max 00:19:19.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18621.59 72.74 3437.10 1138.29 4497.64 00:19:19.208 ======================================================== 00:19:19.208 Total : 18621.59 72.74 3437.10 1138.29 4497.64 00:19:19.208 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xNp1qyUTrn 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xNp1qyUTrn 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3412234 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3412234 /var/tmp/bdevperf.sock 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3412234 ']' 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:19.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:19.208 07:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.208 [2024-11-20 07:33:37.353942] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:19:19.208 [2024-11-20 07:33:37.353998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412234 ] 00:19:19.468 [2024-11-20 07:33:37.442325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.468 [2024-11-20 07:33:37.477451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.038 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:20.038 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:20.038 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xNp1qyUTrn 00:19:20.299 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.299 [2024-11-20 07:33:38.494409] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.559 TLSTESTn1 00:19:20.559 07:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:20.559 Running I/O for 10 seconds... 
00:19:22.893 5164.00 IOPS, 20.17 MiB/s [2024-11-20T06:33:42.044Z] 5604.50 IOPS, 21.89 MiB/s [2024-11-20T06:33:42.985Z] 5743.00 IOPS, 22.43 MiB/s [2024-11-20T06:33:43.927Z] 5754.50 IOPS, 22.48 MiB/s [2024-11-20T06:33:44.870Z] 5739.40 IOPS, 22.42 MiB/s [2024-11-20T06:33:45.810Z] 5715.67 IOPS, 22.33 MiB/s [2024-11-20T06:33:46.750Z] 5748.43 IOPS, 22.45 MiB/s [2024-11-20T06:33:48.136Z] 5755.50 IOPS, 22.48 MiB/s [2024-11-20T06:33:48.707Z] 5770.89 IOPS, 22.54 MiB/s [2024-11-20T06:33:48.968Z] 5814.90 IOPS, 22.71 MiB/s 00:19:30.758 Latency(us) 00:19:30.758 [2024-11-20T06:33:48.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.758 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:30.758 Verification LBA range: start 0x0 length 0x2000 00:19:30.758 TLSTESTn1 : 10.03 5809.70 22.69 0.00 0.00 21988.20 5352.11 44346.03 00:19:30.758 [2024-11-20T06:33:48.968Z] =================================================================================================================== 00:19:30.758 [2024-11-20T06:33:48.968Z] Total : 5809.70 22.69 0.00 0.00 21988.20 5352.11 44346.03 00:19:30.758 { 00:19:30.758 "results": [ 00:19:30.758 { 00:19:30.758 "job": "TLSTESTn1", 00:19:30.758 "core_mask": "0x4", 00:19:30.758 "workload": "verify", 00:19:30.758 "status": "finished", 00:19:30.758 "verify_range": { 00:19:30.758 "start": 0, 00:19:30.758 "length": 8192 00:19:30.758 }, 00:19:30.758 "queue_depth": 128, 00:19:30.758 "io_size": 4096, 00:19:30.758 "runtime": 10.030815, 00:19:30.758 "iops": 5809.697417408256, 00:19:30.758 "mibps": 22.694130536751, 00:19:30.758 "io_failed": 0, 00:19:30.758 "io_timeout": 0, 00:19:30.758 "avg_latency_us": 21988.195238291348, 00:19:30.758 "min_latency_us": 5352.106666666667, 00:19:30.758 "max_latency_us": 44346.026666666665 00:19:30.758 } 00:19:30.758 ], 00:19:30.758 "core_count": 1 00:19:30.758 } 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3412234 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3412234 ']' 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3412234 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3412234 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3412234' 00:19:30.758 killing process with pid 3412234 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3412234 00:19:30.758 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.758 00:19:30.758 Latency(us) 00:19:30.758 [2024-11-20T06:33:48.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.758 [2024-11-20T06:33:48.968Z] 
=================================================================================================================== 00:19:30.758 [2024-11-20T06:33:48.968Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3412234 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MkHP4hzzD3 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MkHP4hzzD3 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MkHP4hzzD3 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MkHP4hzzD3 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3414355 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3414355 /var/tmp/bdevperf.sock 00:19:30.758 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3414355 ']' 00:19:30.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:30.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:30.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:30.759 07:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.020 [2024-11-20 07:33:48.974474] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:19:31.020 [2024-11-20 07:33:48.974531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414355 ] 00:19:31.020 [2024-11-20 07:33:49.057918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.020 [2024-11-20 07:33:49.086602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.589 07:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:31.589 07:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:31.589 07:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MkHP4hzzD3 00:19:31.849 07:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.111 [2024-11-20 07:33:50.078591] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.111 [2024-11-20 07:33:50.089536] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:32.111 [2024-11-20 07:33:50.089730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1289bc0 (107): Transport endpoint is not connected 00:19:32.111 [2024-11-20 07:33:50.090726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1289bc0 (9): Bad file descriptor 00:19:32.111 [2024-11-20 07:33:50.091728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:32.111 [2024-11-20 07:33:50.091735] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:32.111 [2024-11-20 07:33:50.091740] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:32.111 [2024-11-20 07:33:50.091750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:32.111 request: 00:19:32.111 { 00:19:32.111 "name": "TLSTEST", 00:19:32.111 "trtype": "tcp", 00:19:32.111 "traddr": "10.0.0.2", 00:19:32.111 "adrfam": "ipv4", 00:19:32.111 "trsvcid": "4420", 00:19:32.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.111 "prchk_reftag": false, 00:19:32.111 "prchk_guard": false, 00:19:32.111 "hdgst": false, 00:19:32.111 "ddgst": false, 00:19:32.111 "psk": "key0", 00:19:32.111 "allow_unrecognized_csi": false, 00:19:32.111 "method": "bdev_nvme_attach_controller", 00:19:32.111 "req_id": 1 00:19:32.111 } 00:19:32.111 Got JSON-RPC error response 00:19:32.111 response: 00:19:32.111 { 00:19:32.111 "code": -5, 00:19:32.111 "message": "Input/output error" 00:19:32.111 } 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3414355 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3414355 ']' 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3414355 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3414355 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3414355' 00:19:32.111 killing process with pid 3414355 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3414355 00:19:32.111 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.111 00:19:32.111 Latency(us) 00:19:32.111 [2024-11-20T06:33:50.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.111 [2024-11-20T06:33:50.321Z] =================================================================================================================== 00:19:32.111 [2024-11-20T06:33:50.321Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3414355 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xNp1qyUTrn 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.xNp1qyUTrn 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xNp1qyUTrn 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xNp1qyUTrn 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3414605 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3414605 /var/tmp/bdevperf.sock 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3414605 ']' 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:32.111 07:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.374 [2024-11-20 07:33:50.327307] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:19:32.374 [2024-11-20 07:33:50.327361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414605 ] 00:19:32.374 [2024-11-20 07:33:50.413560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.374 [2024-11-20 07:33:50.441087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.946 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:32.946 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:32.946 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xNp1qyUTrn 00:19:33.207 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:33.469 [2024-11-20 07:33:51.469152] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.469 [2024-11-20 07:33:51.473768] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:33.469 [2024-11-20 07:33:51.473787] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:33.469 [2024-11-20 07:33:51.473806] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:33.469 [2024-11-20 07:33:51.474449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afdbc0 (107): Transport endpoint is not connected 00:19:33.469 [2024-11-20 07:33:51.475445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afdbc0 (9): Bad file descriptor 00:19:33.469 [2024-11-20 07:33:51.476447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:33.469 [2024-11-20 07:33:51.476454] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:33.469 [2024-11-20 07:33:51.476460] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:33.469 [2024-11-20 07:33:51.476468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
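The "Could not find PSK for identity" errors above expose the TLS PSK identity the initiator offers during the handshake: it is assembled from a fixed prefix plus the host NQN and subsystem NQN, so a key provisioned for host1 against cnode1 cannot satisfy a connection attempted as host2. A sketch of the identity string as it appears in these errors (the helper below is illustrative, not an SPDK API):

  psk_identity() {
      # mirrors the identity format printed in the errors above
      local hostnqn=$1 subnqn=$2
      echo "NVMe0R01 ${hostnqn} ${subnqn}"
  }
  psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
  # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1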
00:19:33.469 request: 00:19:33.469 { 00:19:33.469 "name": "TLSTEST", 00:19:33.469 "trtype": "tcp", 00:19:33.469 "traddr": "10.0.0.2", 00:19:33.469 "adrfam": "ipv4", 00:19:33.469 "trsvcid": "4420", 00:19:33.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.469 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:33.469 "prchk_reftag": false, 00:19:33.469 "prchk_guard": false, 00:19:33.469 "hdgst": false, 00:19:33.469 "ddgst": false, 00:19:33.469 "psk": "key0", 00:19:33.469 "allow_unrecognized_csi": false, 00:19:33.469 "method": "bdev_nvme_attach_controller", 00:19:33.469 "req_id": 1 00:19:33.469 } 00:19:33.469 Got JSON-RPC error response 00:19:33.469 response: 00:19:33.469 { 00:19:33.469 "code": -5, 00:19:33.469 "message": "Input/output error" 00:19:33.469 } 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3414605 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3414605 ']' 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3414605 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3414605 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3414605' 00:19:33.469 killing process with pid 3414605 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3414605 00:19:33.469 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.469 00:19:33.469 Latency(us) 00:19:33.469 [2024-11-20T06:33:51.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.469 [2024-11-20T06:33:51.679Z] =================================================================================================================== 00:19:33.469 [2024-11-20T06:33:51.679Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3414605 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xNp1qyUTrn 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.xNp1qyUTrn 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xNp1qyUTrn 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xNp1qyUTrn 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3414949 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3414949 /var/tmp/bdevperf.sock 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3414949 ']' 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:33.469 07:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.730 [2024-11-20 07:33:51.727188] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:19:33.730 [2024-11-20 07:33:51.727245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414949 ] 00:19:33.730 [2024-11-20 07:33:51.810066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.730 [2024-11-20 07:33:51.837988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.673 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:34.673 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:34.673 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xNp1qyUTrn 00:19:34.673 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.673 [2024-11-20 07:33:52.858059] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:34.673 [2024-11-20 07:33:52.866197] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:34.673 [2024-11-20 07:33:52.866215] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:34.673 [2024-11-20 07:33:52.866234] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:34.673 [2024-11-20 07:33:52.866321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8abbc0 (107): Transport endpoint is not connected 00:19:34.673 [2024-11-20 07:33:52.867310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8abbc0 (9): Bad file descriptor 00:19:34.673 [2024-11-20 07:33:52.868312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:34.673 [2024-11-20 07:33:52.868319] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:34.673 [2024-11-20 07:33:52.868325] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:34.673 [2024-11-20 07:33:52.868333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
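The request:/response: blocks that follow each failure are rpc.py's dump of the underlying JSON-RPC exchange over the Unix socket. A minimal sketch of the same kind of call made directly with the Python standard library, using the socket path and key file from this case (a single recv is assumed sufficient for a small reply; a robust client would accumulate until a complete JSON object parses):

  python3 - <<'PY'
  import json, socket

  # connect to the bdevperf app's RPC socket (path as in the log)
  s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
  s.connect("/var/tmp/bdevperf.sock")

  # same call the suite issues via rpc.py; param names match the dumps above
  req = {"jsonrpc": "2.0", "id": 1,
         "method": "keyring_file_add_key",
         "params": {"name": "key0", "path": "/tmp/tmp.xNp1qyUTrn"}}
  s.sendall(json.dumps(req).encode())

  # print the JSON-RPC response, e.g. a result or an error object
  print(s.recv(65536).decode())
  PY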
00:19:34.673 request: 00:19:34.673 { 00:19:34.673 "name": "TLSTEST", 00:19:34.673 "trtype": "tcp", 00:19:34.673 "traddr": "10.0.0.2", 00:19:34.673 "adrfam": "ipv4", 00:19:34.673 "trsvcid": "4420", 00:19:34.673 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:34.673 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.673 "prchk_reftag": false, 00:19:34.673 "prchk_guard": false, 00:19:34.673 "hdgst": false, 00:19:34.673 "ddgst": false, 00:19:34.673 "psk": "key0", 00:19:34.673 "allow_unrecognized_csi": false, 00:19:34.673 "method": "bdev_nvme_attach_controller", 00:19:34.673 "req_id": 1 00:19:34.673 } 00:19:34.673 Got JSON-RPC error response 00:19:34.673 response: 00:19:34.673 { 00:19:34.673 "code": -5, 00:19:34.673 "message": "Input/output error" 00:19:34.673 } 00:19:34.934 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3414949 00:19:34.934 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3414949 ']' 00:19:34.934 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3414949 00:19:34.934 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:34.934 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:34.934 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3414949 00:19:34.934 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:34.934 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:34.934 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3414949' 00:19:34.934 killing process with pid 3414949 00:19:34.934 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3414949 00:19:34.934 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.934 00:19:34.934 Latency(us) 00:19:34.934 [2024-11-20T06:33:53.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.934 [2024-11-20T06:33:53.144Z] =================================================================================================================== 00:19:34.934 [2024-11-20T06:33:53.144Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:34.934 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3414949 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:34.934 
07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3415290 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3415290 /var/tmp/bdevperf.sock 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3415290 ']' 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.934 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:34.935 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.935 [2024-11-20 07:33:53.124233] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:19:34.935 [2024-11-20 07:33:53.124288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415290 ] 00:19:35.196 [2024-11-20 07:33:53.210012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.196 [2024-11-20 07:33:53.237503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.768 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:35.768 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:35.768 07:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:36.029 [2024-11-20 07:33:54.064829] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:36.029 [2024-11-20 07:33:54.064852] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:36.029 request: 00:19:36.029 { 00:19:36.029 "name": "key0", 00:19:36.029 "path": "", 00:19:36.029 "method": "keyring_file_add_key", 00:19:36.029 "req_id": 1 00:19:36.029 } 00:19:36.029 Got JSON-RPC error response 00:19:36.029 response: 00:19:36.029 { 00:19:36.029 "code": -1, 00:19:36.029 "message": "Operation not permitted" 00:19:36.029 } 00:19:36.029 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:36.290 [2024-11-20 07:33:54.245359] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.290 [2024-11-20 07:33:54.245382] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:36.290 request: 00:19:36.290 { 00:19:36.290 "name": "TLSTEST", 00:19:36.290 "trtype": "tcp", 00:19:36.290 "traddr": "10.0.0.2", 00:19:36.290 "adrfam": "ipv4", 00:19:36.290 "trsvcid": "4420", 00:19:36.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.290 "prchk_reftag": false, 00:19:36.290 "prchk_guard": false, 00:19:36.290 "hdgst": false, 00:19:36.290 "ddgst": false, 00:19:36.290 "psk": "key0", 00:19:36.290 "allow_unrecognized_csi": false, 00:19:36.290 "method": "bdev_nvme_attach_controller", 00:19:36.290 "req_id": 1 00:19:36.290 } 00:19:36.290 Got JSON-RPC error response 00:19:36.290 response: 00:19:36.290 { 00:19:36.290 "code": -126, 00:19:36.290 "message": "Required key not available" 00:19:36.290 } 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3415290 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3415290 ']' 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3415290 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3415290 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3415290' 00:19:36.290 killing process with pid 3415290 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3415290 00:19:36.290 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.290 00:19:36.290 Latency(us) 00:19:36.290 [2024-11-20T06:33:54.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.290 [2024-11-20T06:33:54.500Z] =================================================================================================================== 00:19:36.290 [2024-11-20T06:33:54.500Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3415290 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3409221 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3409221 ']' 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3409221 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:36.290 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3409221 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3409221' 00:19:36.551 killing process with pid 3409221 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3409221 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3409221 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:36.551 07:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.5ARcpBe19W 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.5ARcpBe19W 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3415642 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3415642 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3415642 ']' 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:36.551 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.551 [2024-11-20 07:33:54.720955] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:19:36.551 [2024-11-20 07:33:54.721011] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.812 [2024-11-20 07:33:54.812381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.812 [2024-11-20 07:33:54.842326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.812 [2024-11-20 07:33:54.842356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
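The format_interchange_psk step above shells out to python to turn the configured hex string into the TLS PSK interchange format, NVMeTLSkey-1:<hash>:<base64>:. A sketch of that derivation, assuming (consistent with the key_long value visible in the log) that the base64 payload is the key bytes followed by their little-endian CRC32, and that hash id 02 selects the SHA-384 variant:

  python3 - <<'PY'
  import base64, zlib

  key = b"00112233445566778899aabbccddeeff0011223344556677"
  crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC32, appended before encoding
  b64 = base64.b64encode(key + crc).decode()
  print(f"NVMeTLSkey-1:02:{b64}:")              # should reproduce key_long above
  PY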
00:19:36.812 [2024-11-20 07:33:54.842362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.812 [2024-11-20 07:33:54.842367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.812 [2024-11-20 07:33:54.842371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.812 [2024-11-20 07:33:54.842867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.383 07:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:37.383 07:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:37.383 07:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:37.383 07:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:37.383 07:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.383 07:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.383 07:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.5ARcpBe19W 00:19:37.383 07:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5ARcpBe19W 00:19:37.383 07:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:37.644 [2024-11-20 07:33:55.720392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.644 07:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:37.906 07:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:37.906 [2024-11-20 07:33:56.089308] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:37.906 [2024-11-20 07:33:56.089507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.906 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:38.167 malloc0 00:19:38.167 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:38.451 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5ARcpBe19W 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5ARcpBe19W 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5ARcpBe19W 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3416009 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3416009 /var/tmp/bdevperf.sock 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3416009 ']' 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:38.763 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.763 [2024-11-20 07:33:56.883318] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:19:38.763 [2024-11-20 07:33:56.883373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416009 ] 00:19:39.071 [2024-11-20 07:33:56.968821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.071 [2024-11-20 07:33:56.998207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.648 07:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:39.648 07:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:39.648 07:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5ARcpBe19W 00:19:39.909 07:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:39.909 [2024-11-20 07:33:58.030225] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.909 TLSTESTn1 00:19:40.170 07:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:40.170 Running I/O for 10 seconds... 00:19:42.056 5781.00 IOPS, 22.58 MiB/s [2024-11-20T06:34:01.652Z] 5809.00 IOPS, 22.69 MiB/s [2024-11-20T06:34:02.593Z] 5525.67 IOPS, 21.58 MiB/s [2024-11-20T06:34:03.537Z] 5739.50 IOPS, 22.42 MiB/s [2024-11-20T06:34:04.477Z] 5774.20 IOPS, 22.56 MiB/s [2024-11-20T06:34:05.418Z] 5778.50 IOPS, 22.57 MiB/s [2024-11-20T06:34:06.359Z] 5629.86 IOPS, 21.99 MiB/s [2024-11-20T06:34:07.301Z] 5684.12 IOPS, 22.20 MiB/s [2024-11-20T06:34:08.684Z] 5703.33 IOPS, 22.28 MiB/s [2024-11-20T06:34:08.684Z] 5685.80 IOPS, 22.21 MiB/s 00:19:50.474 Latency(us) 00:19:50.474 [2024-11-20T06:34:08.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.474 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:50.474 Verification LBA range: start 0x0 length 0x2000 00:19:50.474 TLSTESTn1 : 10.01 5690.35 22.23 0.00 0.00 22462.63 4915.20 25449.81 00:19:50.474 [2024-11-20T06:34:08.684Z] =================================================================================================================== 00:19:50.474 [2024-11-20T06:34:08.684Z] Total : 5690.35 22.23 0.00 0.00 22462.63 4915.20 25449.81 00:19:50.474 { 00:19:50.474 "results": [ 00:19:50.474 { 00:19:50.474 "job": "TLSTESTn1", 00:19:50.474 "core_mask": "0x4", 00:19:50.474 "workload": "verify", 00:19:50.474 "status": "finished", 00:19:50.474 "verify_range": { 00:19:50.474 "start": 0, 00:19:50.474 "length": 8192 00:19:50.474 }, 00:19:50.474 "queue_depth": 128, 00:19:50.474 "io_size": 4096, 00:19:50.474 "runtime": 10.01415, 00:19:50.474 "iops": 5690.348157357339, 00:19:50.474 "mibps": 22.227922489677105, 00:19:50.474 "io_failed": 0, 00:19:50.474 "io_timeout": 0, 00:19:50.474 "avg_latency_us": 22462.62961111891, 00:19:50.474 "min_latency_us": 4915.2, 00:19:50.474 "max_latency_us": 25449.81333333333 00:19:50.474 } 00:19:50.474 ], 00:19:50.474 "core_count": 1 
00:19:50.474 } 00:19:50.474 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:50.474 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3416009 00:19:50.474 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3416009 ']' 00:19:50.474 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3416009 00:19:50.474 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:50.474 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:50.474 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3416009 00:19:50.474 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:50.474 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:50.474 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3416009' 00:19:50.474 killing process with pid 3416009 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3416009 00:19:50.475 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.475 00:19:50.475 Latency(us) 00:19:50.475 [2024-11-20T06:34:08.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.475 [2024-11-20T06:34:08.685Z] =================================================================================================================== 00:19:50.475 [2024-11-20T06:34:08.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3416009 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.5ARcpBe19W 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5ARcpBe19W 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5ARcpBe19W 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5ARcpBe19W 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:50.475 07:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5ARcpBe19W 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3418347 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3418347 /var/tmp/bdevperf.sock 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3418347 ']' 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:50.475 07:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.475 [2024-11-20 07:34:08.507728] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:19:50.475 [2024-11-20 07:34:08.507793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418347 ] 00:19:50.475 [2024-11-20 07:34:08.590877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.475 [2024-11-20 07:34:08.618533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.417 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:51.417 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:51.417 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5ARcpBe19W 00:19:51.417 [2024-11-20 07:34:09.458119] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5ARcpBe19W': 0100666 00:19:51.417 [2024-11-20 07:34:09.458143] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:51.417 request: 00:19:51.417 { 00:19:51.417 "name": "key0", 00:19:51.417 "path": "/tmp/tmp.5ARcpBe19W", 00:19:51.417 "method": "keyring_file_add_key", 00:19:51.417 "req_id": 1 00:19:51.417 } 00:19:51.417 Got JSON-RPC error response 00:19:51.417 response: 00:19:51.417 { 00:19:51.417 "code": -1, 00:19:51.417 "message": "Operation not permitted" 00:19:51.417 } 00:19:51.417 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:51.678 [2024-11-20 07:34:09.634634] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.678 [2024-11-20 07:34:09.634654] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:51.678 request: 00:19:51.678 { 00:19:51.678 "name": "TLSTEST", 00:19:51.678 "trtype": "tcp", 00:19:51.678 "traddr": "10.0.0.2", 00:19:51.678 "adrfam": "ipv4", 00:19:51.678 "trsvcid": "4420", 00:19:51.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.678 "prchk_reftag": false, 00:19:51.678 "prchk_guard": false, 00:19:51.678 "hdgst": false, 00:19:51.678 "ddgst": false, 00:19:51.678 "psk": "key0", 00:19:51.678 "allow_unrecognized_csi": false, 00:19:51.678 "method": "bdev_nvme_attach_controller", 00:19:51.678 "req_id": 1 00:19:51.678 } 00:19:51.678 Got JSON-RPC error response 00:19:51.678 response: 00:19:51.678 { 00:19:51.678 "code": -126, 00:19:51.678 "message": "Required key not available" 00:19:51.678 } 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3418347 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3418347 ']' 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3418347 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3418347 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3418347' 00:19:51.678 killing process with pid 3418347 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3418347 00:19:51.678 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.678 00:19:51.678 Latency(us) 00:19:51.678 [2024-11-20T06:34:09.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.678 [2024-11-20T06:34:09.888Z] =================================================================================================================== 00:19:51.678 [2024-11-20T06:34:09.888Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3418347 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3415642 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3415642 ']' 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3415642 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.678 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3415642 00:19:51.939 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:51.939 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:51.939 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3415642' 00:19:51.939 killing process with pid 3415642 00:19:51.939 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3415642 00:19:51.939 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3415642 00:19:51.939 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:51.939 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.939 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.939 07:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.939 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3418593 00:19:51.939 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3418593 00:19:51.939 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3418593 ']' 00:19:51.939 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.939 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:51.939 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.939 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:51.939 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.939 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:51.939 [2024-11-20 07:34:10.053679] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:19:51.939 [2024-11-20 07:34:10.053731] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.199 [2024-11-20 07:34:10.144631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.199 [2024-11-20 07:34:10.175568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.199 [2024-11-20 07:34:10.175600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.199 [2024-11-20 07:34:10.175605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.199 [2024-11-20 07:34:10.175610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.199 [2024-11-20 07:34:10.175614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
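The keyring failure at 07:34:09 above, and the one the freshly restarted target is about to hit, both come from the same permission check: keyring_file rejects any key file whose mode grants access beyond the owner, which is why the suite deliberately flips the file to 0666 for these negative cases and back to 0600 afterwards. Illustratively:

  chmod 0666 /tmp/tmp.5ARcpBe19W   # keyring_file_add_key fails: "Invalid permissions ... 0100666"
  chmod 0600 /tmp/tmp.5ARcpBe19W   # owner-only access; the same RPC then succeeds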
00:19:52.199 [2024-11-20 07:34:10.176112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.5ARcpBe19W 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.5ARcpBe19W 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.5ARcpBe19W 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5ARcpBe19W 00:19:52.770 07:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:53.031 [2024-11-20 07:34:11.038476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.031 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:53.031 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:53.291 [2024-11-20 07:34:11.375302] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.291 [2024-11-20 07:34:11.375498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.292 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:53.552 malloc0 00:19:53.552 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:53.552 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5ARcpBe19W 00:19:53.812 [2024-11-20 
07:34:11.866445] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5ARcpBe19W': 0100666 00:19:53.812 [2024-11-20 07:34:11.866467] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:53.812 request: 00:19:53.812 { 00:19:53.812 "name": "key0", 00:19:53.812 "path": "/tmp/tmp.5ARcpBe19W", 00:19:53.812 "method": "keyring_file_add_key", 00:19:53.812 "req_id": 1 00:19:53.812 } 00:19:53.812 Got JSON-RPC error response 00:19:53.812 response: 00:19:53.812 { 00:19:53.812 "code": -1, 00:19:53.812 "message": "Operation not permitted" 00:19:53.812 } 00:19:53.812 07:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.072 [2024-11-20 07:34:12.034886] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:54.072 [2024-11-20 07:34:12.034916] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:54.072 request: 00:19:54.072 { 00:19:54.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.072 "host": "nqn.2016-06.io.spdk:host1", 00:19:54.072 "psk": "key0", 00:19:54.072 "method": "nvmf_subsystem_add_host", 00:19:54.072 "req_id": 1 00:19:54.072 } 00:19:54.072 Got JSON-RPC error response 00:19:54.072 response: 00:19:54.072 { 00:19:54.072 "code": -32603, 00:19:54.072 "message": "Internal error" 00:19:54.072 } 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3418593 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3418593 ']' 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3418593 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3418593 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3418593' 00:19:54.072 killing process with pid 3418593 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3418593 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3418593 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.5ARcpBe19W 00:19:54.072 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:54.072 07:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:54.073 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:54.073 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.073 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3419078 00:19:54.073 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3419078 00:19:54.073 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:54.073 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3419078 ']' 00:19:54.073 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.073 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:54.073 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.073 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:54.073 07:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.333 [2024-11-20 07:34:12.297619] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:19:54.333 [2024-11-20 07:34:12.297677] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.333 [2024-11-20 07:34:12.387464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.333 [2024-11-20 07:34:12.417101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.333 [2024-11-20 07:34:12.417129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.333 [2024-11-20 07:34:12.417134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.333 [2024-11-20 07:34:12.417139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.333 [2024-11-20 07:34:12.417143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
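The keyring_file_add_key failure traced above is the negative case target/tls.sh is exercising on purpose: SPDK's file-based keyring rejects any PSK file whose mode grants group or other access (0100666 here), the RPC returns -1 "Operation not permitted", and the follow-up nvmf_subsystem_add_host then fails with "Key 'key0' does not exist" because the key was never registered. A minimal sketch of the fix-and-retry sequence the script performs next, using the same key path and names as the log (rpc.py is the in-tree scripts/rpc.py):

    # Restrict the PSK interchange file to its owner; anything looser trips
    # keyring_file_check_path, as seen in the error above.
    chmod 0600 /tmp/tmp.5ARcpBe19W

    # Now the key registers, and it can be bound to a host NQN with --psk:
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5ARcpBe19W
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0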
00:19:54.333 [2024-11-20 07:34:12.417481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.904 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:54.904 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:54.904 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:54.904 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:54.904 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.164 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.164 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.5ARcpBe19W 00:19:55.164 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5ARcpBe19W 00:19:55.164 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:55.164 [2024-11-20 07:34:13.270195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.164 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:55.425 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:55.425 [2024-11-20 07:34:13.607029] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:55.425 [2024-11-20 07:34:13.607221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.425 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:55.685 malloc0 00:19:55.685 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:55.945 07:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5ARcpBe19W 00:19:55.945 07:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:56.205 07:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3419444 00:19:56.205 07:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.205 07:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.205 07:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3419444 /var/tmp/bdevperf.sock 00:19:56.205 07:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3419444 ']' 00:19:56.205 07:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.205 07:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:56.205 07:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.205 07:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:56.205 07:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.206 [2024-11-20 07:34:14.347509] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:19:56.206 [2024-11-20 07:34:14.347558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419444 ] 00:19:56.466 [2024-11-20 07:34:14.433637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.466 [2024-11-20 07:34:14.462740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.037 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:57.037 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:57.037 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5ARcpBe19W 00:19:57.299 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:57.299 [2024-11-20 07:34:15.442799] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.560 TLSTESTn1 00:19:57.560 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:57.821 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:57.821 "subsystems": [ 00:19:57.821 { 00:19:57.821 "subsystem": "keyring", 00:19:57.821 "config": [ 00:19:57.821 { 00:19:57.821 "method": "keyring_file_add_key", 00:19:57.821 "params": { 00:19:57.821 "name": "key0", 00:19:57.821 "path": "/tmp/tmp.5ARcpBe19W" 00:19:57.821 } 00:19:57.821 } 00:19:57.821 ] 00:19:57.821 }, 00:19:57.821 { 00:19:57.821 "subsystem": "iobuf", 00:19:57.821 "config": [ 00:19:57.821 { 00:19:57.821 "method": "iobuf_set_options", 00:19:57.821 "params": { 00:19:57.821 "small_pool_count": 8192, 00:19:57.821 "large_pool_count": 1024, 00:19:57.821 "small_bufsize": 8192, 00:19:57.821 "large_bufsize": 135168, 00:19:57.821 "enable_numa": false 00:19:57.821 } 00:19:57.821 } 00:19:57.821 ] 00:19:57.821 }, 00:19:57.821 { 00:19:57.821 "subsystem": "sock", 00:19:57.821 "config": [ 00:19:57.821 { 00:19:57.821 "method": "sock_set_default_impl", 00:19:57.821 "params": { 00:19:57.821 "impl_name": "posix" 
00:19:57.821 } 00:19:57.821 }, 00:19:57.821 { 00:19:57.821 "method": "sock_impl_set_options", 00:19:57.821 "params": { 00:19:57.821 "impl_name": "ssl", 00:19:57.821 "recv_buf_size": 4096, 00:19:57.821 "send_buf_size": 4096, 00:19:57.821 "enable_recv_pipe": true, 00:19:57.821 "enable_quickack": false, 00:19:57.821 "enable_placement_id": 0, 00:19:57.821 "enable_zerocopy_send_server": true, 00:19:57.821 "enable_zerocopy_send_client": false, 00:19:57.821 "zerocopy_threshold": 0, 00:19:57.821 "tls_version": 0, 00:19:57.821 "enable_ktls": false 00:19:57.821 } 00:19:57.821 }, 00:19:57.821 { 00:19:57.821 "method": "sock_impl_set_options", 00:19:57.821 "params": { 00:19:57.821 "impl_name": "posix", 00:19:57.821 "recv_buf_size": 2097152, 00:19:57.821 "send_buf_size": 2097152, 00:19:57.821 "enable_recv_pipe": true, 00:19:57.821 "enable_quickack": false, 00:19:57.821 "enable_placement_id": 0, 00:19:57.821 "enable_zerocopy_send_server": true, 00:19:57.821 "enable_zerocopy_send_client": false, 00:19:57.821 "zerocopy_threshold": 0, 00:19:57.821 "tls_version": 0, 00:19:57.821 "enable_ktls": false 00:19:57.821 } 00:19:57.821 } 00:19:57.821 ] 00:19:57.821 }, 00:19:57.821 { 00:19:57.821 "subsystem": "vmd", 00:19:57.821 "config": [] 00:19:57.821 }, 00:19:57.821 { 00:19:57.821 "subsystem": "accel", 00:19:57.821 "config": [ 00:19:57.821 { 00:19:57.821 "method": "accel_set_options", 00:19:57.821 "params": { 00:19:57.821 "small_cache_size": 128, 00:19:57.821 "large_cache_size": 16, 00:19:57.821 "task_count": 2048, 00:19:57.821 "sequence_count": 2048, 00:19:57.821 "buf_count": 2048 00:19:57.821 } 00:19:57.821 } 00:19:57.821 ] 00:19:57.821 }, 00:19:57.821 { 00:19:57.821 "subsystem": "bdev", 00:19:57.821 "config": [ 00:19:57.821 { 00:19:57.821 "method": "bdev_set_options", 00:19:57.821 "params": { 00:19:57.821 "bdev_io_pool_size": 65535, 00:19:57.821 "bdev_io_cache_size": 256, 00:19:57.821 "bdev_auto_examine": true, 00:19:57.821 "iobuf_small_cache_size": 128, 00:19:57.821 "iobuf_large_cache_size": 16 00:19:57.821 } 00:19:57.821 }, 00:19:57.821 { 00:19:57.821 "method": "bdev_raid_set_options", 00:19:57.821 "params": { 00:19:57.821 "process_window_size_kb": 1024, 00:19:57.821 "process_max_bandwidth_mb_sec": 0 00:19:57.821 } 00:19:57.821 }, 00:19:57.821 { 00:19:57.821 "method": "bdev_iscsi_set_options", 00:19:57.821 "params": { 00:19:57.821 "timeout_sec": 30 00:19:57.821 } 00:19:57.821 }, 00:19:57.821 { 00:19:57.821 "method": "bdev_nvme_set_options", 00:19:57.821 "params": { 00:19:57.821 "action_on_timeout": "none", 00:19:57.821 "timeout_us": 0, 00:19:57.821 "timeout_admin_us": 0, 00:19:57.821 "keep_alive_timeout_ms": 10000, 00:19:57.821 "arbitration_burst": 0, 00:19:57.821 "low_priority_weight": 0, 00:19:57.821 "medium_priority_weight": 0, 00:19:57.821 "high_priority_weight": 0, 00:19:57.821 "nvme_adminq_poll_period_us": 10000, 00:19:57.821 "nvme_ioq_poll_period_us": 0, 00:19:57.821 "io_queue_requests": 0, 00:19:57.821 "delay_cmd_submit": true, 00:19:57.821 "transport_retry_count": 4, 00:19:57.821 "bdev_retry_count": 3, 00:19:57.821 "transport_ack_timeout": 0, 00:19:57.821 "ctrlr_loss_timeout_sec": 0, 00:19:57.821 "reconnect_delay_sec": 0, 00:19:57.821 "fast_io_fail_timeout_sec": 0, 00:19:57.821 "disable_auto_failback": false, 00:19:57.821 "generate_uuids": false, 00:19:57.821 "transport_tos": 0, 00:19:57.822 "nvme_error_stat": false, 00:19:57.822 "rdma_srq_size": 0, 00:19:57.822 "io_path_stat": false, 00:19:57.822 "allow_accel_sequence": false, 00:19:57.822 "rdma_max_cq_size": 0, 00:19:57.822 
"rdma_cm_event_timeout_ms": 0, 00:19:57.822 "dhchap_digests": [ 00:19:57.822 "sha256", 00:19:57.822 "sha384", 00:19:57.822 "sha512" 00:19:57.822 ], 00:19:57.822 "dhchap_dhgroups": [ 00:19:57.822 "null", 00:19:57.822 "ffdhe2048", 00:19:57.822 "ffdhe3072", 00:19:57.822 "ffdhe4096", 00:19:57.822 "ffdhe6144", 00:19:57.822 "ffdhe8192" 00:19:57.822 ] 00:19:57.822 } 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "method": "bdev_nvme_set_hotplug", 00:19:57.822 "params": { 00:19:57.822 "period_us": 100000, 00:19:57.822 "enable": false 00:19:57.822 } 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "method": "bdev_malloc_create", 00:19:57.822 "params": { 00:19:57.822 "name": "malloc0", 00:19:57.822 "num_blocks": 8192, 00:19:57.822 "block_size": 4096, 00:19:57.822 "physical_block_size": 4096, 00:19:57.822 "uuid": "45558f29-9abc-4b87-8595-2ad8159d3896", 00:19:57.822 "optimal_io_boundary": 0, 00:19:57.822 "md_size": 0, 00:19:57.822 "dif_type": 0, 00:19:57.822 "dif_is_head_of_md": false, 00:19:57.822 "dif_pi_format": 0 00:19:57.822 } 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "method": "bdev_wait_for_examine" 00:19:57.822 } 00:19:57.822 ] 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "subsystem": "nbd", 00:19:57.822 "config": [] 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "subsystem": "scheduler", 00:19:57.822 "config": [ 00:19:57.822 { 00:19:57.822 "method": "framework_set_scheduler", 00:19:57.822 "params": { 00:19:57.822 "name": "static" 00:19:57.822 } 00:19:57.822 } 00:19:57.822 ] 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "subsystem": "nvmf", 00:19:57.822 "config": [ 00:19:57.822 { 00:19:57.822 "method": "nvmf_set_config", 00:19:57.822 "params": { 00:19:57.822 "discovery_filter": "match_any", 00:19:57.822 "admin_cmd_passthru": { 00:19:57.822 "identify_ctrlr": false 00:19:57.822 }, 00:19:57.822 "dhchap_digests": [ 00:19:57.822 "sha256", 00:19:57.822 "sha384", 00:19:57.822 "sha512" 00:19:57.822 ], 00:19:57.822 "dhchap_dhgroups": [ 00:19:57.822 "null", 00:19:57.822 "ffdhe2048", 00:19:57.822 "ffdhe3072", 00:19:57.822 "ffdhe4096", 00:19:57.822 "ffdhe6144", 00:19:57.822 "ffdhe8192" 00:19:57.822 ] 00:19:57.822 } 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "method": "nvmf_set_max_subsystems", 00:19:57.822 "params": { 00:19:57.822 "max_subsystems": 1024 00:19:57.822 } 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "method": "nvmf_set_crdt", 00:19:57.822 "params": { 00:19:57.822 "crdt1": 0, 00:19:57.822 "crdt2": 0, 00:19:57.822 "crdt3": 0 00:19:57.822 } 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "method": "nvmf_create_transport", 00:19:57.822 "params": { 00:19:57.822 "trtype": "TCP", 00:19:57.822 "max_queue_depth": 128, 00:19:57.822 "max_io_qpairs_per_ctrlr": 127, 00:19:57.822 "in_capsule_data_size": 4096, 00:19:57.822 "max_io_size": 131072, 00:19:57.822 "io_unit_size": 131072, 00:19:57.822 "max_aq_depth": 128, 00:19:57.822 "num_shared_buffers": 511, 00:19:57.822 "buf_cache_size": 4294967295, 00:19:57.822 "dif_insert_or_strip": false, 00:19:57.822 "zcopy": false, 00:19:57.822 "c2h_success": false, 00:19:57.822 "sock_priority": 0, 00:19:57.822 "abort_timeout_sec": 1, 00:19:57.822 "ack_timeout": 0, 00:19:57.822 "data_wr_pool_size": 0 00:19:57.822 } 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "method": "nvmf_create_subsystem", 00:19:57.822 "params": { 00:19:57.822 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.822 "allow_any_host": false, 00:19:57.822 "serial_number": "SPDK00000000000001", 00:19:57.822 "model_number": "SPDK bdev Controller", 00:19:57.822 "max_namespaces": 10, 00:19:57.822 "min_cntlid": 1, 00:19:57.822 
"max_cntlid": 65519, 00:19:57.822 "ana_reporting": false 00:19:57.822 } 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "method": "nvmf_subsystem_add_host", 00:19:57.822 "params": { 00:19:57.822 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.822 "host": "nqn.2016-06.io.spdk:host1", 00:19:57.822 "psk": "key0" 00:19:57.822 } 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "method": "nvmf_subsystem_add_ns", 00:19:57.822 "params": { 00:19:57.822 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.822 "namespace": { 00:19:57.822 "nsid": 1, 00:19:57.822 "bdev_name": "malloc0", 00:19:57.822 "nguid": "45558F299ABC4B8785952AD8159D3896", 00:19:57.822 "uuid": "45558f29-9abc-4b87-8595-2ad8159d3896", 00:19:57.822 "no_auto_visible": false 00:19:57.822 } 00:19:57.822 } 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "method": "nvmf_subsystem_add_listener", 00:19:57.822 "params": { 00:19:57.822 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.822 "listen_address": { 00:19:57.822 "trtype": "TCP", 00:19:57.822 "adrfam": "IPv4", 00:19:57.822 "traddr": "10.0.0.2", 00:19:57.822 "trsvcid": "4420" 00:19:57.822 }, 00:19:57.822 "secure_channel": true 00:19:57.822 } 00:19:57.822 } 00:19:57.822 ] 00:19:57.822 } 00:19:57.822 ] 00:19:57.822 }' 00:19:57.822 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:57.822 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:57.822 "subsystems": [ 00:19:57.822 { 00:19:57.822 "subsystem": "keyring", 00:19:57.822 "config": [ 00:19:57.822 { 00:19:57.822 "method": "keyring_file_add_key", 00:19:57.822 "params": { 00:19:57.822 "name": "key0", 00:19:57.822 "path": "/tmp/tmp.5ARcpBe19W" 00:19:57.822 } 00:19:57.822 } 00:19:57.822 ] 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "subsystem": "iobuf", 00:19:57.822 "config": [ 00:19:57.822 { 00:19:57.822 "method": "iobuf_set_options", 00:19:57.822 "params": { 00:19:57.822 "small_pool_count": 8192, 00:19:57.822 "large_pool_count": 1024, 00:19:57.822 "small_bufsize": 8192, 00:19:57.822 "large_bufsize": 135168, 00:19:57.822 "enable_numa": false 00:19:57.822 } 00:19:57.822 } 00:19:57.822 ] 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "subsystem": "sock", 00:19:57.822 "config": [ 00:19:57.822 { 00:19:57.822 "method": "sock_set_default_impl", 00:19:57.822 "params": { 00:19:57.822 "impl_name": "posix" 00:19:57.822 } 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "method": "sock_impl_set_options", 00:19:57.822 "params": { 00:19:57.822 "impl_name": "ssl", 00:19:57.822 "recv_buf_size": 4096, 00:19:57.822 "send_buf_size": 4096, 00:19:57.822 "enable_recv_pipe": true, 00:19:57.822 "enable_quickack": false, 00:19:57.822 "enable_placement_id": 0, 00:19:57.822 "enable_zerocopy_send_server": true, 00:19:57.822 "enable_zerocopy_send_client": false, 00:19:57.822 "zerocopy_threshold": 0, 00:19:57.822 "tls_version": 0, 00:19:57.822 "enable_ktls": false 00:19:57.822 } 00:19:57.822 }, 00:19:57.822 { 00:19:57.822 "method": "sock_impl_set_options", 00:19:57.822 "params": { 00:19:57.822 "impl_name": "posix", 00:19:57.822 "recv_buf_size": 2097152, 00:19:57.822 "send_buf_size": 2097152, 00:19:57.822 "enable_recv_pipe": true, 00:19:57.822 "enable_quickack": false, 00:19:57.822 "enable_placement_id": 0, 00:19:57.823 "enable_zerocopy_send_server": true, 00:19:57.823 "enable_zerocopy_send_client": false, 00:19:57.823 "zerocopy_threshold": 0, 00:19:57.823 "tls_version": 0, 00:19:57.823 "enable_ktls": false 00:19:57.823 } 00:19:57.823 
} 00:19:57.823 ] 00:19:57.823 }, 00:19:57.823 { 00:19:57.823 "subsystem": "vmd", 00:19:57.823 "config": [] 00:19:57.823 }, 00:19:57.823 { 00:19:57.823 "subsystem": "accel", 00:19:57.823 "config": [ 00:19:57.823 { 00:19:57.823 "method": "accel_set_options", 00:19:57.823 "params": { 00:19:57.823 "small_cache_size": 128, 00:19:57.823 "large_cache_size": 16, 00:19:57.823 "task_count": 2048, 00:19:57.823 "sequence_count": 2048, 00:19:57.823 "buf_count": 2048 00:19:57.823 } 00:19:57.823 } 00:19:57.823 ] 00:19:57.823 }, 00:19:57.823 { 00:19:57.823 "subsystem": "bdev", 00:19:57.823 "config": [ 00:19:57.823 { 00:19:57.823 "method": "bdev_set_options", 00:19:57.823 "params": { 00:19:57.823 "bdev_io_pool_size": 65535, 00:19:57.823 "bdev_io_cache_size": 256, 00:19:57.823 "bdev_auto_examine": true, 00:19:57.823 "iobuf_small_cache_size": 128, 00:19:57.823 "iobuf_large_cache_size": 16 00:19:57.823 } 00:19:57.823 }, 00:19:57.823 { 00:19:57.823 "method": "bdev_raid_set_options", 00:19:57.823 "params": { 00:19:57.823 "process_window_size_kb": 1024, 00:19:57.823 "process_max_bandwidth_mb_sec": 0 00:19:57.823 } 00:19:57.823 }, 00:19:57.823 { 00:19:57.823 "method": "bdev_iscsi_set_options", 00:19:57.823 "params": { 00:19:57.823 "timeout_sec": 30 00:19:57.823 } 00:19:57.823 }, 00:19:57.823 { 00:19:57.823 "method": "bdev_nvme_set_options", 00:19:57.823 "params": { 00:19:57.823 "action_on_timeout": "none", 00:19:57.823 "timeout_us": 0, 00:19:57.823 "timeout_admin_us": 0, 00:19:57.823 "keep_alive_timeout_ms": 10000, 00:19:57.823 "arbitration_burst": 0, 00:19:57.823 "low_priority_weight": 0, 00:19:57.823 "medium_priority_weight": 0, 00:19:57.823 "high_priority_weight": 0, 00:19:57.823 "nvme_adminq_poll_period_us": 10000, 00:19:57.823 "nvme_ioq_poll_period_us": 0, 00:19:57.823 "io_queue_requests": 512, 00:19:57.823 "delay_cmd_submit": true, 00:19:57.823 "transport_retry_count": 4, 00:19:57.823 "bdev_retry_count": 3, 00:19:57.823 "transport_ack_timeout": 0, 00:19:57.823 "ctrlr_loss_timeout_sec": 0, 00:19:57.823 "reconnect_delay_sec": 0, 00:19:57.823 "fast_io_fail_timeout_sec": 0, 00:19:57.823 "disable_auto_failback": false, 00:19:57.823 "generate_uuids": false, 00:19:57.823 "transport_tos": 0, 00:19:57.823 "nvme_error_stat": false, 00:19:57.823 "rdma_srq_size": 0, 00:19:57.823 "io_path_stat": false, 00:19:57.823 "allow_accel_sequence": false, 00:19:57.823 "rdma_max_cq_size": 0, 00:19:57.823 "rdma_cm_event_timeout_ms": 0, 00:19:57.823 "dhchap_digests": [ 00:19:57.823 "sha256", 00:19:57.823 "sha384", 00:19:57.823 "sha512" 00:19:57.823 ], 00:19:57.823 "dhchap_dhgroups": [ 00:19:57.823 "null", 00:19:57.823 "ffdhe2048", 00:19:57.823 "ffdhe3072", 00:19:57.823 "ffdhe4096", 00:19:57.823 "ffdhe6144", 00:19:57.823 "ffdhe8192" 00:19:57.823 ] 00:19:57.823 } 00:19:57.823 }, 00:19:57.823 { 00:19:57.823 "method": "bdev_nvme_attach_controller", 00:19:57.823 "params": { 00:19:57.823 "name": "TLSTEST", 00:19:57.823 "trtype": "TCP", 00:19:57.823 "adrfam": "IPv4", 00:19:57.823 "traddr": "10.0.0.2", 00:19:57.823 "trsvcid": "4420", 00:19:57.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.823 "prchk_reftag": false, 00:19:57.823 "prchk_guard": false, 00:19:57.823 "ctrlr_loss_timeout_sec": 0, 00:19:57.823 "reconnect_delay_sec": 0, 00:19:57.823 "fast_io_fail_timeout_sec": 0, 00:19:57.823 "psk": "key0", 00:19:57.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.823 "hdgst": false, 00:19:57.823 "ddgst": false, 00:19:57.823 "multipath": "multipath" 00:19:57.823 } 00:19:57.823 }, 00:19:57.823 { 00:19:57.823 "method": 
"bdev_nvme_set_hotplug", 00:19:57.823 "params": { 00:19:57.823 "period_us": 100000, 00:19:57.823 "enable": false 00:19:57.823 } 00:19:57.823 }, 00:19:57.823 { 00:19:57.823 "method": "bdev_wait_for_examine" 00:19:57.823 } 00:19:57.823 ] 00:19:57.823 }, 00:19:57.823 { 00:19:57.823 "subsystem": "nbd", 00:19:57.823 "config": [] 00:19:57.823 } 00:19:57.823 ] 00:19:57.823 }' 00:19:57.823 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3419444 00:19:57.823 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3419444 ']' 00:19:57.823 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3419444 00:19:57.823 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3419444 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3419444' 00:19:58.083 killing process with pid 3419444 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3419444 00:19:58.083 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.083 00:19:58.083 Latency(us) 00:19:58.083 [2024-11-20T06:34:16.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.083 [2024-11-20T06:34:16.293Z] =================================================================================================================== 00:19:58.083 [2024-11-20T06:34:16.293Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3419444 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3419078 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3419078 ']' 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3419078 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3419078 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3419078' 00:19:58.083 killing process with pid 3419078 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3419078 00:19:58.083 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3419078 00:19:58.345 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:58.345 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:58.345 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:58.345 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.345 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:58.345 "subsystems": [ 00:19:58.345 { 00:19:58.345 "subsystem": "keyring", 00:19:58.345 "config": [ 00:19:58.345 { 00:19:58.345 "method": "keyring_file_add_key", 00:19:58.345 "params": { 00:19:58.345 "name": "key0", 00:19:58.345 "path": "/tmp/tmp.5ARcpBe19W" 00:19:58.345 } 00:19:58.345 } 00:19:58.345 ] 00:19:58.345 }, 00:19:58.345 { 00:19:58.345 "subsystem": "iobuf", 00:19:58.345 "config": [ 00:19:58.345 { 00:19:58.345 "method": "iobuf_set_options", 00:19:58.345 "params": { 00:19:58.345 "small_pool_count": 8192, 00:19:58.345 "large_pool_count": 1024, 00:19:58.345 "small_bufsize": 8192, 00:19:58.345 "large_bufsize": 135168, 00:19:58.345 "enable_numa": false 00:19:58.345 } 00:19:58.345 } 00:19:58.345 ] 00:19:58.345 }, 00:19:58.345 { 00:19:58.345 "subsystem": "sock", 00:19:58.345 "config": [ 00:19:58.345 { 00:19:58.345 "method": "sock_set_default_impl", 00:19:58.345 "params": { 00:19:58.345 "impl_name": "posix" 00:19:58.345 } 00:19:58.345 }, 00:19:58.345 { 00:19:58.345 "method": "sock_impl_set_options", 00:19:58.345 "params": { 00:19:58.345 "impl_name": "ssl", 00:19:58.345 "recv_buf_size": 4096, 00:19:58.345 "send_buf_size": 4096, 00:19:58.345 "enable_recv_pipe": true, 00:19:58.345 "enable_quickack": false, 00:19:58.345 "enable_placement_id": 0, 00:19:58.345 "enable_zerocopy_send_server": true, 00:19:58.345 "enable_zerocopy_send_client": false, 00:19:58.345 "zerocopy_threshold": 0, 00:19:58.345 "tls_version": 0, 00:19:58.345 "enable_ktls": false 00:19:58.345 } 00:19:58.345 }, 00:19:58.345 { 00:19:58.345 "method": "sock_impl_set_options", 00:19:58.345 "params": { 00:19:58.345 "impl_name": "posix", 00:19:58.345 "recv_buf_size": 2097152, 00:19:58.345 "send_buf_size": 2097152, 00:19:58.345 "enable_recv_pipe": true, 00:19:58.345 "enable_quickack": false, 00:19:58.345 "enable_placement_id": 0, 00:19:58.345 "enable_zerocopy_send_server": true, 00:19:58.345 "enable_zerocopy_send_client": false, 00:19:58.345 "zerocopy_threshold": 0, 00:19:58.345 "tls_version": 0, 00:19:58.345 "enable_ktls": false 00:19:58.345 } 00:19:58.345 } 00:19:58.345 ] 00:19:58.345 }, 00:19:58.345 { 00:19:58.345 "subsystem": "vmd", 00:19:58.345 "config": [] 00:19:58.345 }, 00:19:58.345 { 00:19:58.345 "subsystem": "accel", 00:19:58.345 "config": [ 00:19:58.345 { 00:19:58.345 "method": "accel_set_options", 00:19:58.345 "params": { 00:19:58.345 "small_cache_size": 128, 00:19:58.345 "large_cache_size": 16, 00:19:58.345 "task_count": 2048, 00:19:58.345 "sequence_count": 2048, 00:19:58.345 "buf_count": 2048 00:19:58.345 } 00:19:58.345 } 00:19:58.345 ] 00:19:58.345 }, 00:19:58.345 { 00:19:58.345 "subsystem": "bdev", 00:19:58.345 "config": [ 00:19:58.345 { 00:19:58.345 "method": "bdev_set_options", 00:19:58.345 "params": { 00:19:58.345 "bdev_io_pool_size": 65535, 00:19:58.345 "bdev_io_cache_size": 256, 00:19:58.345 "bdev_auto_examine": true, 00:19:58.345 "iobuf_small_cache_size": 128, 00:19:58.345 "iobuf_large_cache_size": 16 00:19:58.345 } 00:19:58.345 }, 00:19:58.345 { 00:19:58.345 "method": "bdev_raid_set_options", 00:19:58.345 "params": { 00:19:58.345 
"process_window_size_kb": 1024, 00:19:58.345 "process_max_bandwidth_mb_sec": 0 00:19:58.345 } 00:19:58.345 }, 00:19:58.345 { 00:19:58.345 "method": "bdev_iscsi_set_options", 00:19:58.345 "params": { 00:19:58.345 "timeout_sec": 30 00:19:58.345 } 00:19:58.345 }, 00:19:58.345 { 00:19:58.345 "method": "bdev_nvme_set_options", 00:19:58.345 "params": { 00:19:58.345 "action_on_timeout": "none", 00:19:58.345 "timeout_us": 0, 00:19:58.345 "timeout_admin_us": 0, 00:19:58.345 "keep_alive_timeout_ms": 10000, 00:19:58.345 "arbitration_burst": 0, 00:19:58.345 "low_priority_weight": 0, 00:19:58.345 "medium_priority_weight": 0, 00:19:58.345 "high_priority_weight": 0, 00:19:58.345 "nvme_adminq_poll_period_us": 10000, 00:19:58.345 "nvme_ioq_poll_period_us": 0, 00:19:58.345 "io_queue_requests": 0, 00:19:58.345 "delay_cmd_submit": true, 00:19:58.345 "transport_retry_count": 4, 00:19:58.345 "bdev_retry_count": 3, 00:19:58.345 "transport_ack_timeout": 0, 00:19:58.345 "ctrlr_loss_timeout_sec": 0, 00:19:58.345 "reconnect_delay_sec": 0, 00:19:58.345 "fast_io_fail_timeout_sec": 0, 00:19:58.345 "disable_auto_failback": false, 00:19:58.345 "generate_uuids": false, 00:19:58.345 "transport_tos": 0, 00:19:58.345 "nvme_error_stat": false, 00:19:58.345 "rdma_srq_size": 0, 00:19:58.345 "io_path_stat": false, 00:19:58.345 "allow_accel_sequence": false, 00:19:58.345 "rdma_max_cq_size": 0, 00:19:58.345 "rdma_cm_event_timeout_ms": 0, 00:19:58.345 "dhchap_digests": [ 00:19:58.345 "sha256", 00:19:58.345 "sha384", 00:19:58.345 "sha512" 00:19:58.345 ], 00:19:58.345 "dhchap_dhgroups": [ 00:19:58.345 "null", 00:19:58.345 "ffdhe2048", 00:19:58.345 "ffdhe3072", 00:19:58.345 "ffdhe4096", 00:19:58.345 "ffdhe6144", 00:19:58.345 "ffdhe8192" 00:19:58.345 ] 00:19:58.345 } 00:19:58.345 }, 00:19:58.345 { 00:19:58.346 "method": "bdev_nvme_set_hotplug", 00:19:58.346 "params": { 00:19:58.346 "period_us": 100000, 00:19:58.346 "enable": false 00:19:58.346 } 00:19:58.346 }, 00:19:58.346 { 00:19:58.346 "method": "bdev_malloc_create", 00:19:58.346 "params": { 00:19:58.346 "name": "malloc0", 00:19:58.346 "num_blocks": 8192, 00:19:58.346 "block_size": 4096, 00:19:58.346 "physical_block_size": 4096, 00:19:58.346 "uuid": "45558f29-9abc-4b87-8595-2ad8159d3896", 00:19:58.346 "optimal_io_boundary": 0, 00:19:58.346 "md_size": 0, 00:19:58.346 "dif_type": 0, 00:19:58.346 "dif_is_head_of_md": false, 00:19:58.346 "dif_pi_format": 0 00:19:58.346 } 00:19:58.346 }, 00:19:58.346 { 00:19:58.346 "method": "bdev_wait_for_examine" 00:19:58.346 } 00:19:58.346 ] 00:19:58.346 }, 00:19:58.346 { 00:19:58.346 "subsystem": "nbd", 00:19:58.346 "config": [] 00:19:58.346 }, 00:19:58.346 { 00:19:58.346 "subsystem": "scheduler", 00:19:58.346 "config": [ 00:19:58.346 { 00:19:58.346 "method": "framework_set_scheduler", 00:19:58.346 "params": { 00:19:58.346 "name": "static" 00:19:58.346 } 00:19:58.346 } 00:19:58.346 ] 00:19:58.346 }, 00:19:58.346 { 00:19:58.346 "subsystem": "nvmf", 00:19:58.346 "config": [ 00:19:58.346 { 00:19:58.346 "method": "nvmf_set_config", 00:19:58.346 "params": { 00:19:58.346 "discovery_filter": "match_any", 00:19:58.346 "admin_cmd_passthru": { 00:19:58.346 "identify_ctrlr": false 00:19:58.346 }, 00:19:58.346 "dhchap_digests": [ 00:19:58.346 "sha256", 00:19:58.346 "sha384", 00:19:58.346 "sha512" 00:19:58.346 ], 00:19:58.346 "dhchap_dhgroups": [ 00:19:58.346 "null", 00:19:58.346 "ffdhe2048", 00:19:58.346 "ffdhe3072", 00:19:58.346 "ffdhe4096", 00:19:58.346 "ffdhe6144", 00:19:58.346 "ffdhe8192" 00:19:58.346 ] 00:19:58.346 } 00:19:58.346 }, 00:19:58.346 { 
00:19:58.346 "method": "nvmf_set_max_subsystems", 00:19:58.346 "params": { 00:19:58.346 "max_subsystems": 1024 00:19:58.346 } 00:19:58.346 }, 00:19:58.346 { 00:19:58.346 "method": "nvmf_set_crdt", 00:19:58.346 "params": { 00:19:58.346 "crdt1": 0, 00:19:58.346 "crdt2": 0, 00:19:58.346 "crdt3": 0 00:19:58.346 } 00:19:58.346 }, 00:19:58.346 { 00:19:58.346 "method": "nvmf_create_transport", 00:19:58.346 "params": { 00:19:58.346 "trtype": "TCP", 00:19:58.346 "max_queue_depth": 128, 00:19:58.346 "max_io_qpairs_per_ctrlr": 127, 00:19:58.346 "in_capsule_data_size": 4096, 00:19:58.346 "max_io_size": 131072, 00:19:58.346 "io_unit_size": 131072, 00:19:58.346 "max_aq_depth": 128, 00:19:58.346 "num_shared_buffers": 511, 00:19:58.346 "buf_cache_size": 4294967295, 00:19:58.346 "dif_insert_or_strip": false, 00:19:58.346 "zcopy": false, 00:19:58.346 "c2h_success": false, 00:19:58.346 "sock_priority": 0, 00:19:58.346 "abort_timeout_sec": 1, 00:19:58.346 "ack_timeout": 0, 00:19:58.346 "data_wr_pool_size": 0 00:19:58.346 } 00:19:58.346 }, 00:19:58.346 { 00:19:58.346 "method": "nvmf_create_subsystem", 00:19:58.346 "params": { 00:19:58.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.346 "allow_any_host": false, 00:19:58.346 "serial_number": "SPDK00000000000001", 00:19:58.346 "model_number": "SPDK bdev Controller", 00:19:58.346 "max_namespaces": 10, 00:19:58.346 "min_cntlid": 1, 00:19:58.346 "max_cntlid": 65519, 00:19:58.346 "ana_reporting": false 00:19:58.346 } 00:19:58.346 }, 00:19:58.346 { 00:19:58.346 "method": "nvmf_subsystem_add_host", 00:19:58.346 "params": { 00:19:58.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.346 "host": "nqn.2016-06.io.spdk:host1", 00:19:58.346 "psk": "key0" 00:19:58.346 } 00:19:58.346 }, 00:19:58.346 { 00:19:58.346 "method": "nvmf_subsystem_add_ns", 00:19:58.346 "params": { 00:19:58.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.346 "namespace": { 00:19:58.346 "nsid": 1, 00:19:58.346 "bdev_name": "malloc0", 00:19:58.346 "nguid": "45558F299ABC4B8785952AD8159D3896", 00:19:58.346 "uuid": "45558f29-9abc-4b87-8595-2ad8159d3896", 00:19:58.346 "no_auto_visible": false 00:19:58.346 } 00:19:58.346 } 00:19:58.346 }, 00:19:58.346 { 00:19:58.346 "method": "nvmf_subsystem_add_listener", 00:19:58.346 "params": { 00:19:58.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.346 "listen_address": { 00:19:58.346 "trtype": "TCP", 00:19:58.346 "adrfam": "IPv4", 00:19:58.346 "traddr": "10.0.0.2", 00:19:58.346 "trsvcid": "4420" 00:19:58.346 }, 00:19:58.346 "secure_channel": true 00:19:58.346 } 00:19:58.346 } 00:19:58.346 ] 00:19:58.346 } 00:19:58.346 ] 00:19:58.346 }' 00:19:58.346 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3419804 00:19:58.346 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3419804 00:19:58.346 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:58.346 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3419804 ']' 00:19:58.346 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.346 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:58.346 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:19:58.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.346 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:58.346 07:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.346 [2024-11-20 07:34:16.417454] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:19:58.346 [2024-11-20 07:34:16.417508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.346 [2024-11-20 07:34:16.506687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.346 [2024-11-20 07:34:16.538428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.346 [2024-11-20 07:34:16.538463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.346 [2024-11-20 07:34:16.538469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.346 [2024-11-20 07:34:16.538474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.346 [2024-11-20 07:34:16.538481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.346 [2024-11-20 07:34:16.538990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.606 [2024-11-20 07:34:16.733470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.606 [2024-11-20 07:34:16.765495] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:58.606 [2024-11-20 07:34:16.765686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3420152 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3420152 /var/tmp/bdevperf.sock 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3420152 ']' 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
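Both processes in this phase are restarted from the configuration captured a moment earlier: the tgtconf and bdevperfconf JSON blobs produced by save_config are echoed straight back into the new nvmf_tgt and bdevperf through -c /dev/fd/62 and -c /dev/fd/63. A sketch of the same round-trip with an ordinary file, assuming an SPDK build tree (tgt.json is an illustrative name):

    # Capture the live configuration of the running target...
    scripts/rpc.py save_config > tgt.json
    # ...and boot a fresh target directly from it; the '-c /dev/fd/62' in the
    # log is the same idea, with process substitution instead of a temp file.
    build/bin/nvmf_tgt -m 0x2 -c tgt.json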
00:19:59.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.178 07:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:59.178 "subsystems": [ 00:19:59.178 { 00:19:59.178 "subsystem": "keyring", 00:19:59.178 "config": [ 00:19:59.178 { 00:19:59.178 "method": "keyring_file_add_key", 00:19:59.178 "params": { 00:19:59.178 "name": "key0", 00:19:59.178 "path": "/tmp/tmp.5ARcpBe19W" 00:19:59.178 } 00:19:59.178 } 00:19:59.178 ] 00:19:59.178 }, 00:19:59.178 { 00:19:59.178 "subsystem": "iobuf", 00:19:59.178 "config": [ 00:19:59.178 { 00:19:59.178 "method": "iobuf_set_options", 00:19:59.178 "params": { 00:19:59.178 "small_pool_count": 8192, 00:19:59.178 "large_pool_count": 1024, 00:19:59.178 "small_bufsize": 8192, 00:19:59.178 "large_bufsize": 135168, 00:19:59.178 "enable_numa": false 00:19:59.178 } 00:19:59.178 } 00:19:59.178 ] 00:19:59.178 }, 00:19:59.178 { 00:19:59.178 "subsystem": "sock", 00:19:59.178 "config": [ 00:19:59.178 { 00:19:59.178 "method": "sock_set_default_impl", 00:19:59.178 "params": { 00:19:59.178 "impl_name": "posix" 00:19:59.178 } 00:19:59.178 }, 00:19:59.178 { 00:19:59.178 "method": "sock_impl_set_options", 00:19:59.178 "params": { 00:19:59.178 "impl_name": "ssl", 00:19:59.178 "recv_buf_size": 4096, 00:19:59.178 "send_buf_size": 4096, 00:19:59.178 "enable_recv_pipe": true, 00:19:59.178 "enable_quickack": false, 00:19:59.178 "enable_placement_id": 0, 00:19:59.178 "enable_zerocopy_send_server": true, 00:19:59.178 "enable_zerocopy_send_client": false, 00:19:59.178 "zerocopy_threshold": 0, 00:19:59.178 "tls_version": 0, 00:19:59.178 "enable_ktls": false 00:19:59.178 } 00:19:59.178 }, 00:19:59.178 { 00:19:59.178 "method": "sock_impl_set_options", 00:19:59.178 "params": { 00:19:59.178 "impl_name": "posix", 00:19:59.178 "recv_buf_size": 2097152, 00:19:59.178 "send_buf_size": 2097152, 00:19:59.178 "enable_recv_pipe": true, 00:19:59.178 "enable_quickack": false, 00:19:59.178 "enable_placement_id": 0, 00:19:59.178 "enable_zerocopy_send_server": true, 00:19:59.178 "enable_zerocopy_send_client": false, 00:19:59.178 "zerocopy_threshold": 0, 00:19:59.178 "tls_version": 0, 00:19:59.178 "enable_ktls": false 00:19:59.178 } 00:19:59.178 } 00:19:59.178 ] 00:19:59.178 }, 00:19:59.178 { 00:19:59.178 "subsystem": "vmd", 00:19:59.178 "config": [] 00:19:59.178 }, 00:19:59.178 { 00:19:59.178 "subsystem": "accel", 00:19:59.178 "config": [ 00:19:59.178 { 00:19:59.178 "method": "accel_set_options", 00:19:59.178 "params": { 00:19:59.178 "small_cache_size": 128, 00:19:59.178 "large_cache_size": 16, 00:19:59.178 "task_count": 2048, 00:19:59.178 "sequence_count": 2048, 00:19:59.178 "buf_count": 2048 00:19:59.178 } 00:19:59.178 } 00:19:59.178 ] 00:19:59.178 }, 00:19:59.178 { 00:19:59.178 "subsystem": "bdev", 00:19:59.178 "config": [ 00:19:59.178 { 00:19:59.178 "method": "bdev_set_options", 00:19:59.178 "params": { 00:19:59.178 "bdev_io_pool_size": 65535, 00:19:59.178 "bdev_io_cache_size": 256, 00:19:59.178 "bdev_auto_examine": true, 00:19:59.178 "iobuf_small_cache_size": 128, 
00:19:59.178 "iobuf_large_cache_size": 16 00:19:59.178 } 00:19:59.178 }, 00:19:59.178 { 00:19:59.178 "method": "bdev_raid_set_options", 00:19:59.178 "params": { 00:19:59.178 "process_window_size_kb": 1024, 00:19:59.178 "process_max_bandwidth_mb_sec": 0 00:19:59.178 } 00:19:59.178 }, 00:19:59.178 { 00:19:59.178 "method": "bdev_iscsi_set_options", 00:19:59.178 "params": { 00:19:59.179 "timeout_sec": 30 00:19:59.179 } 00:19:59.179 }, 00:19:59.179 { 00:19:59.179 "method": "bdev_nvme_set_options", 00:19:59.179 "params": { 00:19:59.179 "action_on_timeout": "none", 00:19:59.179 "timeout_us": 0, 00:19:59.179 "timeout_admin_us": 0, 00:19:59.179 "keep_alive_timeout_ms": 10000, 00:19:59.179 "arbitration_burst": 0, 00:19:59.179 "low_priority_weight": 0, 00:19:59.179 "medium_priority_weight": 0, 00:19:59.179 "high_priority_weight": 0, 00:19:59.179 "nvme_adminq_poll_period_us": 10000, 00:19:59.179 "nvme_ioq_poll_period_us": 0, 00:19:59.179 "io_queue_requests": 512, 00:19:59.179 "delay_cmd_submit": true, 00:19:59.179 "transport_retry_count": 4, 00:19:59.179 "bdev_retry_count": 3, 00:19:59.179 "transport_ack_timeout": 0, 00:19:59.179 "ctrlr_loss_timeout_sec": 0, 00:19:59.179 "reconnect_delay_sec": 0, 00:19:59.179 "fast_io_fail_timeout_sec": 0, 00:19:59.179 "disable_auto_failback": false, 00:19:59.179 "generate_uuids": false, 00:19:59.179 "transport_tos": 0, 00:19:59.179 "nvme_error_stat": false, 00:19:59.179 "rdma_srq_size": 0, 00:19:59.179 "io_path_stat": false, 00:19:59.179 "allow_accel_sequence": false, 00:19:59.179 "rdma_max_cq_size": 0, 00:19:59.179 "rdma_cm_event_timeout_ms": 0, 00:19:59.179 "dhchap_digests": [ 00:19:59.179 "sha256", 00:19:59.179 "sha384", 00:19:59.179 "sha512" 00:19:59.179 ], 00:19:59.179 "dhchap_dhgroups": [ 00:19:59.179 "null", 00:19:59.179 "ffdhe2048", 00:19:59.179 "ffdhe3072", 00:19:59.179 "ffdhe4096", 00:19:59.179 "ffdhe6144", 00:19:59.179 "ffdhe8192" 00:19:59.179 ] 00:19:59.179 } 00:19:59.179 }, 00:19:59.179 { 00:19:59.179 "method": "bdev_nvme_attach_controller", 00:19:59.179 "params": { 00:19:59.179 "name": "TLSTEST", 00:19:59.179 "trtype": "TCP", 00:19:59.179 "adrfam": "IPv4", 00:19:59.179 "traddr": "10.0.0.2", 00:19:59.179 "trsvcid": "4420", 00:19:59.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.179 "prchk_reftag": false, 00:19:59.179 "prchk_guard": false, 00:19:59.179 "ctrlr_loss_timeout_sec": 0, 00:19:59.179 "reconnect_delay_sec": 0, 00:19:59.179 "fast_io_fail_timeout_sec": 0, 00:19:59.179 "psk": "key0", 00:19:59.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.179 "hdgst": false, 00:19:59.179 "ddgst": false, 00:19:59.179 "multipath": "multipath" 00:19:59.179 } 00:19:59.179 }, 00:19:59.179 { 00:19:59.179 "method": "bdev_nvme_set_hotplug", 00:19:59.179 "params": { 00:19:59.179 "period_us": 100000, 00:19:59.179 "enable": false 00:19:59.179 } 00:19:59.179 }, 00:19:59.179 { 00:19:59.179 "method": "bdev_wait_for_examine" 00:19:59.179 } 00:19:59.179 ] 00:19:59.179 }, 00:19:59.179 { 00:19:59.179 "subsystem": "nbd", 00:19:59.179 "config": [] 00:19:59.179 } 00:19:59.179 ] 00:19:59.179 }' 00:19:59.179 [2024-11-20 07:34:17.310038] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:19:59.179 [2024-11-20 07:34:17.310092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420152 ] 00:19:59.438 [2024-11-20 07:34:17.396837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.438 [2024-11-20 07:34:17.425973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.438 [2024-11-20 07:34:17.561314] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.007 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:00.007 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:00.007 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:00.007 Running I/O for 10 seconds... 00:20:02.331 6228.00 IOPS, 24.33 MiB/s [2024-11-20T06:34:21.483Z] 5908.00 IOPS, 23.08 MiB/s [2024-11-20T06:34:22.423Z] 5813.67 IOPS, 22.71 MiB/s [2024-11-20T06:34:23.367Z] 5978.00 IOPS, 23.35 MiB/s [2024-11-20T06:34:24.310Z] 6021.20 IOPS, 23.52 MiB/s [2024-11-20T06:34:25.252Z] 5996.50 IOPS, 23.42 MiB/s [2024-11-20T06:34:26.637Z] 6008.57 IOPS, 23.47 MiB/s [2024-11-20T06:34:27.579Z] 6061.88 IOPS, 23.68 MiB/s [2024-11-20T06:34:28.520Z] 6065.00 IOPS, 23.69 MiB/s [2024-11-20T06:34:28.520Z] 6064.70 IOPS, 23.69 MiB/s 00:20:10.310 Latency(us) 00:20:10.310 [2024-11-20T06:34:28.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.310 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:10.310 Verification LBA range: start 0x0 length 0x2000 00:20:10.310 TLSTESTn1 : 10.02 6066.52 23.70 0.00 0.00 21067.06 4532.91 23265.28 00:20:10.310 [2024-11-20T06:34:28.520Z] =================================================================================================================== 00:20:10.310 [2024-11-20T06:34:28.520Z] Total : 6066.52 23.70 0.00 0.00 21067.06 4532.91 23265.28 00:20:10.310 { 00:20:10.310 "results": [ 00:20:10.310 { 00:20:10.310 "job": "TLSTESTn1", 00:20:10.310 "core_mask": "0x4", 00:20:10.310 "workload": "verify", 00:20:10.310 "status": "finished", 00:20:10.310 "verify_range": { 00:20:10.310 "start": 0, 00:20:10.310 "length": 8192 00:20:10.310 }, 00:20:10.310 "queue_depth": 128, 00:20:10.310 "io_size": 4096, 00:20:10.310 "runtime": 10.017766, 00:20:10.310 "iops": 6066.522216629935, 00:20:10.310 "mibps": 23.697352408710685, 00:20:10.310 "io_failed": 0, 00:20:10.310 "io_timeout": 0, 00:20:10.310 "avg_latency_us": 21067.058286190688, 00:20:10.310 "min_latency_us": 4532.906666666667, 00:20:10.310 "max_latency_us": 23265.28 00:20:10.310 } 00:20:10.310 ], 00:20:10.310 "core_count": 1 00:20:10.310 } 00:20:10.310 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3420152 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3420152 ']' 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3420152 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3420152 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3420152' 00:20:10.311 killing process with pid 3420152 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3420152 00:20:10.311 Received shutdown signal, test time was about 10.000000 seconds 00:20:10.311 00:20:10.311 Latency(us) 00:20:10.311 [2024-11-20T06:34:28.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.311 [2024-11-20T06:34:28.521Z] =================================================================================================================== 00:20:10.311 [2024-11-20T06:34:28.521Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3420152 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3419804 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3419804 ']' 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3419804 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3419804 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3419804' 00:20:10.311 killing process with pid 3419804 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3419804 00:20:10.311 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3419804 00:20:10.572 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:10.572 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.572 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:10.572 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.572 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3422176 00:20:10.572 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:10.572 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3422176 
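Teardown between cases follows the killprocess helper traced above: check that the pid is still alive with kill -0, make sure its comm is not sudo, then kill it and wait for it to be reaped. A simplified sketch of that pattern (the real helper in autotest_common.sh carries additional handling omitted here):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                        # already gone
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true   # reap our own child; SIGTERM exit is expected
    }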
00:20:10.572 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3422176 ']' 00:20:10.572 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.572 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:10.572 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.572 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:10.572 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.572 [2024-11-20 07:34:28.678712] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:20:10.572 [2024-11-20 07:34:28.678774] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.572 [2024-11-20 07:34:28.743727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.572 [2024-11-20 07:34:28.772308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.572 [2024-11-20 07:34:28.772337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.572 [2024-11-20 07:34:28.772344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.572 [2024-11-20 07:34:28.772349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.572 [2024-11-20 07:34:28.772353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
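Once the reactor below comes up, target/tls.sh provisions a TLS-capable subsystem on this target. Condensed from the rpc.py calls traced in this log (the PSK material in /tmp/tmp.5ARcpBe19W is never printed):

    # TCP transport, subsystem, TLS listener (-k), malloc backing bdev,
    # namespace, PSK registered in the keyring, then a host admitted with it.
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5ARcpBe19W
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0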
00:20:10.572 [2024-11-20 07:34:28.772833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.832 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:10.832 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:10.833 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:10.833 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:10.833 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.833 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.833 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.5ARcpBe19W 00:20:10.833 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5ARcpBe19W 00:20:10.833 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:11.094 [2024-11-20 07:34:29.054621] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.094 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:11.094 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:11.356 [2024-11-20 07:34:29.435557] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:11.356 [2024-11-20 07:34:29.435866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.356 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:11.618 malloc0 00:20:11.618 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:11.879 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5ARcpBe19W 00:20:11.879 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:12.141 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3422541 00:20:12.141 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:12.141 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.141 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3422541 /var/tmp/bdevperf.sock 00:20:12.141 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3422541 ']' 00:20:12.141 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.141 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:12.141 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.141 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:12.141 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.141 [2024-11-20 07:34:30.318346] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:20:12.141 [2024-11-20 07:34:30.318460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422541 ] 00:20:12.402 [2024-11-20 07:34:30.409822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.402 [2024-11-20 07:34:30.444273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.975 07:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:12.975 07:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:12.975 07:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5ARcpBe19W 00:20:13.236 07:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:13.496 [2024-11-20 07:34:31.471974] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.496 nvme0n1 00:20:13.496 07:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:13.496 Running I/O for 1 seconds... 
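The initiator side mirrors this: the same PSK file is registered in bdevperf's own keyring over its dedicated RPC socket, and bdev_nvme_attach_controller is given --psk so the TCP connection to the target is wrapped in TLS. Condensed from the calls traced above (the I/O results follow below):

    # bdevperf runs as a second SPDK app with its own RPC socket.
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5ARcpBe19W
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests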
00:20:14.884 5126.00 IOPS, 20.02 MiB/s 00:20:14.884 Latency(us) 00:20:14.884 [2024-11-20T06:34:33.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.884 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:14.884 Verification LBA range: start 0x0 length 0x2000 00:20:14.884 nvme0n1 : 1.05 5020.35 19.61 0.00 0.00 24982.88 6772.05 43035.31 00:20:14.884 [2024-11-20T06:34:33.094Z] =================================================================================================================== 00:20:14.884 [2024-11-20T06:34:33.094Z] Total : 5020.35 19.61 0.00 0.00 24982.88 6772.05 43035.31 00:20:14.884 { 00:20:14.884 "results": [ 00:20:14.884 { 00:20:14.884 "job": "nvme0n1", 00:20:14.884 "core_mask": "0x2", 00:20:14.884 "workload": "verify", 00:20:14.884 "status": "finished", 00:20:14.884 "verify_range": { 00:20:14.884 "start": 0, 00:20:14.884 "length": 8192 00:20:14.884 }, 00:20:14.884 "queue_depth": 128, 00:20:14.884 "io_size": 4096, 00:20:14.884 "runtime": 1.046739, 00:20:14.884 "iops": 5020.3536889329625, 00:20:14.884 "mibps": 19.610756597394385, 00:20:14.884 "io_failed": 0, 00:20:14.884 "io_timeout": 0, 00:20:14.884 "avg_latency_us": 24982.882324135746, 00:20:14.884 "min_latency_us": 6772.053333333333, 00:20:14.884 "max_latency_us": 43035.306666666664 00:20:14.884 } 00:20:14.884 ], 00:20:14.884 "core_count": 1 00:20:14.884 } 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3422541 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3422541 ']' 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3422541 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3422541 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3422541' 00:20:14.884 killing process with pid 3422541 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3422541 00:20:14.884 Received shutdown signal, test time was about 1.000000 seconds 00:20:14.884 00:20:14.884 Latency(us) 00:20:14.884 [2024-11-20T06:34:33.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.884 [2024-11-20T06:34:33.094Z] =================================================================================================================== 00:20:14.884 [2024-11-20T06:34:33.094Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3422541 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3422176 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3422176 ']' 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3422176 00:20:14.884 07:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3422176 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3422176' 00:20:14.884 killing process with pid 3422176 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3422176 00:20:14.884 07:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3422176 00:20:14.884 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:14.884 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:14.884 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:14.884 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.884 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3423219 00:20:14.884 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:14.884 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3423219 00:20:14.884 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3423219 ']' 00:20:14.884 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.884 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:14.884 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.884 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:14.884 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.147 [2024-11-20 07:34:33.149545] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:20:15.147 [2024-11-20 07:34:33.149597] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.147 [2024-11-20 07:34:33.245095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.147 [2024-11-20 07:34:33.279578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.147 [2024-11-20 07:34:33.279617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:15.147 [2024-11-20 07:34:33.279625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.147 [2024-11-20 07:34:33.279632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.147 [2024-11-20 07:34:33.279638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.147 [2024-11-20 07:34:33.280241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.091 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:16.091 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:16.091 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:16.091 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:16.091 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.091 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.091 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:16.091 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.091 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.091 [2024-11-20 07:34:34.003563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.091 malloc0 00:20:16.091 [2024-11-20 07:34:34.033782] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:16.091 [2024-11-20 07:34:34.034099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.091 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.091 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3423283 00:20:16.091 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3423283 /var/tmp/bdevperf.sock 00:20:16.091 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:16.091 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3423283 ']' 00:20:16.091 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.091 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:16.091 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.091 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:16.091 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.091 [2024-11-20 07:34:34.117061] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:20:16.091 [2024-11-20 07:34:34.117125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423283 ] 00:20:16.091 [2024-11-20 07:34:34.205383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.091 [2024-11-20 07:34:34.239901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.036 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:17.036 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:17.036 07:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5ARcpBe19W 00:20:17.036 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:17.036 [2024-11-20 07:34:35.211220] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.297 nvme0n1 00:20:17.297 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:17.297 Running I/O for 1 seconds... 00:20:18.239 5652.00 IOPS, 22.08 MiB/s 00:20:18.239 Latency(us) 00:20:18.239 [2024-11-20T06:34:36.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.239 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:18.239 Verification LBA range: start 0x0 length 0x2000 00:20:18.239 nvme0n1 : 1.01 5704.99 22.29 0.00 0.00 22308.15 4614.83 28180.48 00:20:18.239 [2024-11-20T06:34:36.449Z] =================================================================================================================== 00:20:18.239 [2024-11-20T06:34:36.449Z] Total : 5704.99 22.29 0.00 0.00 22308.15 4614.83 28180.48 00:20:18.239 { 00:20:18.239 "results": [ 00:20:18.239 { 00:20:18.239 "job": "nvme0n1", 00:20:18.239 "core_mask": "0x2", 00:20:18.239 "workload": "verify", 00:20:18.239 "status": "finished", 00:20:18.239 "verify_range": { 00:20:18.239 "start": 0, 00:20:18.239 "length": 8192 00:20:18.239 }, 00:20:18.239 "queue_depth": 128, 00:20:18.239 "io_size": 4096, 00:20:18.239 "runtime": 1.013149, 00:20:18.239 "iops": 5704.985150259241, 00:20:18.239 "mibps": 22.28509824320016, 00:20:18.239 "io_failed": 0, 00:20:18.239 "io_timeout": 0, 00:20:18.240 "avg_latency_us": 22308.152987312573, 00:20:18.240 "min_latency_us": 4614.826666666667, 00:20:18.240 "max_latency_us": 28180.48 00:20:18.240 } 00:20:18.240 ], 00:20:18.240 "core_count": 1 00:20:18.240 } 00:20:18.240 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:18.240 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.240 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.500 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.500 07:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:18.500 "subsystems": [ 00:20:18.500 { 00:20:18.500 "subsystem": "keyring", 00:20:18.500 "config": [ 00:20:18.500 { 00:20:18.500 "method": "keyring_file_add_key", 00:20:18.500 "params": { 00:20:18.500 "name": "key0", 00:20:18.500 "path": "/tmp/tmp.5ARcpBe19W" 00:20:18.500 } 00:20:18.500 } 00:20:18.500 ] 00:20:18.500 }, 00:20:18.501 { 00:20:18.501 "subsystem": "iobuf", 00:20:18.501 "config": [ 00:20:18.501 { 00:20:18.501 "method": "iobuf_set_options", 00:20:18.501 "params": { 00:20:18.501 "small_pool_count": 8192, 00:20:18.501 "large_pool_count": 1024, 00:20:18.501 "small_bufsize": 8192, 00:20:18.501 "large_bufsize": 135168, 00:20:18.501 "enable_numa": false 00:20:18.501 } 00:20:18.501 } 00:20:18.501 ] 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "subsystem": "sock", 00:20:18.501 "config": [ 00:20:18.501 { 00:20:18.501 "method": "sock_set_default_impl", 00:20:18.501 "params": { 00:20:18.501 "impl_name": "posix" 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "sock_impl_set_options", 00:20:18.501 "params": { 00:20:18.501 "impl_name": "ssl", 00:20:18.501 "recv_buf_size": 4096, 00:20:18.501 "send_buf_size": 4096, 00:20:18.501 "enable_recv_pipe": true, 00:20:18.501 "enable_quickack": false, 00:20:18.501 "enable_placement_id": 0, 00:20:18.501 "enable_zerocopy_send_server": true, 00:20:18.501 "enable_zerocopy_send_client": false, 00:20:18.501 "zerocopy_threshold": 0, 00:20:18.501 "tls_version": 0, 00:20:18.501 "enable_ktls": false 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "sock_impl_set_options", 00:20:18.501 "params": { 00:20:18.501 "impl_name": "posix", 00:20:18.501 "recv_buf_size": 2097152, 00:20:18.501 "send_buf_size": 2097152, 00:20:18.501 "enable_recv_pipe": true, 00:20:18.501 "enable_quickack": false, 00:20:18.501 "enable_placement_id": 0, 00:20:18.501 "enable_zerocopy_send_server": true, 00:20:18.501 "enable_zerocopy_send_client": false, 00:20:18.501 "zerocopy_threshold": 0, 00:20:18.501 "tls_version": 0, 00:20:18.501 "enable_ktls": false 00:20:18.501 } 00:20:18.501 } 00:20:18.501 ] 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "subsystem": "vmd", 00:20:18.501 "config": [] 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "subsystem": "accel", 00:20:18.501 "config": [ 00:20:18.501 { 00:20:18.501 "method": "accel_set_options", 00:20:18.501 "params": { 00:20:18.501 "small_cache_size": 128, 00:20:18.501 "large_cache_size": 16, 00:20:18.501 "task_count": 2048, 00:20:18.501 "sequence_count": 2048, 00:20:18.501 "buf_count": 2048 00:20:18.501 } 00:20:18.501 } 00:20:18.501 ] 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "subsystem": "bdev", 00:20:18.501 "config": [ 00:20:18.501 { 00:20:18.501 "method": "bdev_set_options", 00:20:18.501 "params": { 00:20:18.501 "bdev_io_pool_size": 65535, 00:20:18.501 "bdev_io_cache_size": 256, 00:20:18.501 "bdev_auto_examine": true, 00:20:18.501 "iobuf_small_cache_size": 128, 00:20:18.501 "iobuf_large_cache_size": 16 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "bdev_raid_set_options", 00:20:18.501 "params": { 00:20:18.501 "process_window_size_kb": 1024, 00:20:18.501 "process_max_bandwidth_mb_sec": 0 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "bdev_iscsi_set_options", 00:20:18.501 "params": { 00:20:18.501 "timeout_sec": 30 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "bdev_nvme_set_options", 00:20:18.501 "params": { 00:20:18.501 "action_on_timeout": "none", 00:20:18.501 
"timeout_us": 0, 00:20:18.501 "timeout_admin_us": 0, 00:20:18.501 "keep_alive_timeout_ms": 10000, 00:20:18.501 "arbitration_burst": 0, 00:20:18.501 "low_priority_weight": 0, 00:20:18.501 "medium_priority_weight": 0, 00:20:18.501 "high_priority_weight": 0, 00:20:18.501 "nvme_adminq_poll_period_us": 10000, 00:20:18.501 "nvme_ioq_poll_period_us": 0, 00:20:18.501 "io_queue_requests": 0, 00:20:18.501 "delay_cmd_submit": true, 00:20:18.501 "transport_retry_count": 4, 00:20:18.501 "bdev_retry_count": 3, 00:20:18.501 "transport_ack_timeout": 0, 00:20:18.501 "ctrlr_loss_timeout_sec": 0, 00:20:18.501 "reconnect_delay_sec": 0, 00:20:18.501 "fast_io_fail_timeout_sec": 0, 00:20:18.501 "disable_auto_failback": false, 00:20:18.501 "generate_uuids": false, 00:20:18.501 "transport_tos": 0, 00:20:18.501 "nvme_error_stat": false, 00:20:18.501 "rdma_srq_size": 0, 00:20:18.501 "io_path_stat": false, 00:20:18.501 "allow_accel_sequence": false, 00:20:18.501 "rdma_max_cq_size": 0, 00:20:18.501 "rdma_cm_event_timeout_ms": 0, 00:20:18.501 "dhchap_digests": [ 00:20:18.501 "sha256", 00:20:18.501 "sha384", 00:20:18.501 "sha512" 00:20:18.501 ], 00:20:18.501 "dhchap_dhgroups": [ 00:20:18.501 "null", 00:20:18.501 "ffdhe2048", 00:20:18.501 "ffdhe3072", 00:20:18.501 "ffdhe4096", 00:20:18.501 "ffdhe6144", 00:20:18.501 "ffdhe8192" 00:20:18.501 ] 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "bdev_nvme_set_hotplug", 00:20:18.501 "params": { 00:20:18.501 "period_us": 100000, 00:20:18.501 "enable": false 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "bdev_malloc_create", 00:20:18.501 "params": { 00:20:18.501 "name": "malloc0", 00:20:18.501 "num_blocks": 8192, 00:20:18.501 "block_size": 4096, 00:20:18.501 "physical_block_size": 4096, 00:20:18.501 "uuid": "f0065ab7-0984-4cfb-b22e-c891f6bb2aa2", 00:20:18.501 "optimal_io_boundary": 0, 00:20:18.501 "md_size": 0, 00:20:18.501 "dif_type": 0, 00:20:18.501 "dif_is_head_of_md": false, 00:20:18.501 "dif_pi_format": 0 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "bdev_wait_for_examine" 00:20:18.501 } 00:20:18.501 ] 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "subsystem": "nbd", 00:20:18.501 "config": [] 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "subsystem": "scheduler", 00:20:18.501 "config": [ 00:20:18.501 { 00:20:18.501 "method": "framework_set_scheduler", 00:20:18.501 "params": { 00:20:18.501 "name": "static" 00:20:18.501 } 00:20:18.501 } 00:20:18.501 ] 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "subsystem": "nvmf", 00:20:18.501 "config": [ 00:20:18.501 { 00:20:18.501 "method": "nvmf_set_config", 00:20:18.501 "params": { 00:20:18.501 "discovery_filter": "match_any", 00:20:18.501 "admin_cmd_passthru": { 00:20:18.501 "identify_ctrlr": false 00:20:18.501 }, 00:20:18.501 "dhchap_digests": [ 00:20:18.501 "sha256", 00:20:18.501 "sha384", 00:20:18.501 "sha512" 00:20:18.501 ], 00:20:18.501 "dhchap_dhgroups": [ 00:20:18.501 "null", 00:20:18.501 "ffdhe2048", 00:20:18.501 "ffdhe3072", 00:20:18.501 "ffdhe4096", 00:20:18.501 "ffdhe6144", 00:20:18.501 "ffdhe8192" 00:20:18.501 ] 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "nvmf_set_max_subsystems", 00:20:18.501 "params": { 00:20:18.501 "max_subsystems": 1024 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "nvmf_set_crdt", 00:20:18.501 "params": { 00:20:18.501 "crdt1": 0, 00:20:18.501 "crdt2": 0, 00:20:18.501 "crdt3": 0 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "nvmf_create_transport", 00:20:18.501 "params": 
{ 00:20:18.501 "trtype": "TCP", 00:20:18.501 "max_queue_depth": 128, 00:20:18.501 "max_io_qpairs_per_ctrlr": 127, 00:20:18.501 "in_capsule_data_size": 4096, 00:20:18.501 "max_io_size": 131072, 00:20:18.501 "io_unit_size": 131072, 00:20:18.501 "max_aq_depth": 128, 00:20:18.501 "num_shared_buffers": 511, 00:20:18.501 "buf_cache_size": 4294967295, 00:20:18.501 "dif_insert_or_strip": false, 00:20:18.501 "zcopy": false, 00:20:18.501 "c2h_success": false, 00:20:18.501 "sock_priority": 0, 00:20:18.501 "abort_timeout_sec": 1, 00:20:18.501 "ack_timeout": 0, 00:20:18.501 "data_wr_pool_size": 0 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "nvmf_create_subsystem", 00:20:18.501 "params": { 00:20:18.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.501 "allow_any_host": false, 00:20:18.501 "serial_number": "00000000000000000000", 00:20:18.501 "model_number": "SPDK bdev Controller", 00:20:18.501 "max_namespaces": 32, 00:20:18.501 "min_cntlid": 1, 00:20:18.501 "max_cntlid": 65519, 00:20:18.501 "ana_reporting": false 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "nvmf_subsystem_add_host", 00:20:18.501 "params": { 00:20:18.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.501 "host": "nqn.2016-06.io.spdk:host1", 00:20:18.501 "psk": "key0" 00:20:18.501 } 00:20:18.501 }, 00:20:18.501 { 00:20:18.501 "method": "nvmf_subsystem_add_ns", 00:20:18.501 "params": { 00:20:18.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.501 "namespace": { 00:20:18.501 "nsid": 1, 00:20:18.501 "bdev_name": "malloc0", 00:20:18.501 "nguid": "F0065AB709844CFBB22EC891F6BB2AA2", 00:20:18.502 "uuid": "f0065ab7-0984-4cfb-b22e-c891f6bb2aa2", 00:20:18.502 "no_auto_visible": false 00:20:18.502 } 00:20:18.502 } 00:20:18.502 }, 00:20:18.502 { 00:20:18.502 "method": "nvmf_subsystem_add_listener", 00:20:18.502 "params": { 00:20:18.502 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.502 "listen_address": { 00:20:18.502 "trtype": "TCP", 00:20:18.502 "adrfam": "IPv4", 00:20:18.502 "traddr": "10.0.0.2", 00:20:18.502 "trsvcid": "4420" 00:20:18.502 }, 00:20:18.502 "secure_channel": false, 00:20:18.502 "sock_impl": "ssl" 00:20:18.502 } 00:20:18.502 } 00:20:18.502 ] 00:20:18.502 } 00:20:18.502 ] 00:20:18.502 }' 00:20:18.502 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:18.763 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:18.763 "subsystems": [ 00:20:18.763 { 00:20:18.763 "subsystem": "keyring", 00:20:18.763 "config": [ 00:20:18.763 { 00:20:18.763 "method": "keyring_file_add_key", 00:20:18.763 "params": { 00:20:18.763 "name": "key0", 00:20:18.763 "path": "/tmp/tmp.5ARcpBe19W" 00:20:18.763 } 00:20:18.763 } 00:20:18.763 ] 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "subsystem": "iobuf", 00:20:18.763 "config": [ 00:20:18.763 { 00:20:18.763 "method": "iobuf_set_options", 00:20:18.763 "params": { 00:20:18.763 "small_pool_count": 8192, 00:20:18.763 "large_pool_count": 1024, 00:20:18.763 "small_bufsize": 8192, 00:20:18.763 "large_bufsize": 135168, 00:20:18.763 "enable_numa": false 00:20:18.763 } 00:20:18.763 } 00:20:18.763 ] 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "subsystem": "sock", 00:20:18.763 "config": [ 00:20:18.763 { 00:20:18.763 "method": "sock_set_default_impl", 00:20:18.763 "params": { 00:20:18.763 "impl_name": "posix" 00:20:18.763 } 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "method": "sock_impl_set_options", 00:20:18.763 
"params": { 00:20:18.763 "impl_name": "ssl", 00:20:18.763 "recv_buf_size": 4096, 00:20:18.763 "send_buf_size": 4096, 00:20:18.763 "enable_recv_pipe": true, 00:20:18.763 "enable_quickack": false, 00:20:18.763 "enable_placement_id": 0, 00:20:18.763 "enable_zerocopy_send_server": true, 00:20:18.763 "enable_zerocopy_send_client": false, 00:20:18.763 "zerocopy_threshold": 0, 00:20:18.763 "tls_version": 0, 00:20:18.763 "enable_ktls": false 00:20:18.763 } 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "method": "sock_impl_set_options", 00:20:18.763 "params": { 00:20:18.763 "impl_name": "posix", 00:20:18.763 "recv_buf_size": 2097152, 00:20:18.763 "send_buf_size": 2097152, 00:20:18.763 "enable_recv_pipe": true, 00:20:18.763 "enable_quickack": false, 00:20:18.763 "enable_placement_id": 0, 00:20:18.763 "enable_zerocopy_send_server": true, 00:20:18.763 "enable_zerocopy_send_client": false, 00:20:18.763 "zerocopy_threshold": 0, 00:20:18.763 "tls_version": 0, 00:20:18.763 "enable_ktls": false 00:20:18.763 } 00:20:18.763 } 00:20:18.763 ] 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "subsystem": "vmd", 00:20:18.763 "config": [] 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "subsystem": "accel", 00:20:18.763 "config": [ 00:20:18.763 { 00:20:18.763 "method": "accel_set_options", 00:20:18.763 "params": { 00:20:18.763 "small_cache_size": 128, 00:20:18.763 "large_cache_size": 16, 00:20:18.763 "task_count": 2048, 00:20:18.763 "sequence_count": 2048, 00:20:18.763 "buf_count": 2048 00:20:18.763 } 00:20:18.763 } 00:20:18.763 ] 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "subsystem": "bdev", 00:20:18.763 "config": [ 00:20:18.763 { 00:20:18.764 "method": "bdev_set_options", 00:20:18.764 "params": { 00:20:18.764 "bdev_io_pool_size": 65535, 00:20:18.764 "bdev_io_cache_size": 256, 00:20:18.764 "bdev_auto_examine": true, 00:20:18.764 "iobuf_small_cache_size": 128, 00:20:18.764 "iobuf_large_cache_size": 16 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "bdev_raid_set_options", 00:20:18.764 "params": { 00:20:18.764 "process_window_size_kb": 1024, 00:20:18.764 "process_max_bandwidth_mb_sec": 0 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "bdev_iscsi_set_options", 00:20:18.764 "params": { 00:20:18.764 "timeout_sec": 30 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "bdev_nvme_set_options", 00:20:18.764 "params": { 00:20:18.764 "action_on_timeout": "none", 00:20:18.764 "timeout_us": 0, 00:20:18.764 "timeout_admin_us": 0, 00:20:18.764 "keep_alive_timeout_ms": 10000, 00:20:18.764 "arbitration_burst": 0, 00:20:18.764 "low_priority_weight": 0, 00:20:18.764 "medium_priority_weight": 0, 00:20:18.764 "high_priority_weight": 0, 00:20:18.764 "nvme_adminq_poll_period_us": 10000, 00:20:18.764 "nvme_ioq_poll_period_us": 0, 00:20:18.764 "io_queue_requests": 512, 00:20:18.764 "delay_cmd_submit": true, 00:20:18.764 "transport_retry_count": 4, 00:20:18.764 "bdev_retry_count": 3, 00:20:18.764 "transport_ack_timeout": 0, 00:20:18.764 "ctrlr_loss_timeout_sec": 0, 00:20:18.764 "reconnect_delay_sec": 0, 00:20:18.764 "fast_io_fail_timeout_sec": 0, 00:20:18.764 "disable_auto_failback": false, 00:20:18.764 "generate_uuids": false, 00:20:18.764 "transport_tos": 0, 00:20:18.764 "nvme_error_stat": false, 00:20:18.764 "rdma_srq_size": 0, 00:20:18.764 "io_path_stat": false, 00:20:18.764 "allow_accel_sequence": false, 00:20:18.764 "rdma_max_cq_size": 0, 00:20:18.764 "rdma_cm_event_timeout_ms": 0, 00:20:18.764 "dhchap_digests": [ 00:20:18.764 "sha256", 00:20:18.764 "sha384", 00:20:18.764 
"sha512" 00:20:18.764 ], 00:20:18.764 "dhchap_dhgroups": [ 00:20:18.764 "null", 00:20:18.764 "ffdhe2048", 00:20:18.764 "ffdhe3072", 00:20:18.764 "ffdhe4096", 00:20:18.764 "ffdhe6144", 00:20:18.764 "ffdhe8192" 00:20:18.764 ] 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "bdev_nvme_attach_controller", 00:20:18.764 "params": { 00:20:18.764 "name": "nvme0", 00:20:18.764 "trtype": "TCP", 00:20:18.764 "adrfam": "IPv4", 00:20:18.764 "traddr": "10.0.0.2", 00:20:18.764 "trsvcid": "4420", 00:20:18.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.764 "prchk_reftag": false, 00:20:18.764 "prchk_guard": false, 00:20:18.764 "ctrlr_loss_timeout_sec": 0, 00:20:18.764 "reconnect_delay_sec": 0, 00:20:18.764 "fast_io_fail_timeout_sec": 0, 00:20:18.764 "psk": "key0", 00:20:18.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.764 "hdgst": false, 00:20:18.764 "ddgst": false, 00:20:18.764 "multipath": "multipath" 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "bdev_nvme_set_hotplug", 00:20:18.764 "params": { 00:20:18.764 "period_us": 100000, 00:20:18.764 "enable": false 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "bdev_enable_histogram", 00:20:18.764 "params": { 00:20:18.764 "name": "nvme0n1", 00:20:18.764 "enable": true 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "bdev_wait_for_examine" 00:20:18.764 } 00:20:18.764 ] 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "subsystem": "nbd", 00:20:18.764 "config": [] 00:20:18.764 } 00:20:18.764 ] 00:20:18.764 }' 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3423283 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3423283 ']' 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3423283 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3423283 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3423283' 00:20:18.764 killing process with pid 3423283 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3423283 00:20:18.764 Received shutdown signal, test time was about 1.000000 seconds 00:20:18.764 00:20:18.764 Latency(us) 00:20:18.764 [2024-11-20T06:34:36.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.764 [2024-11-20T06:34:36.974Z] =================================================================================================================== 00:20:18.764 [2024-11-20T06:34:36.974Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3423283 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3423219 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3423219 
']' 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3423219 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:18.764 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3423219 00:20:19.026 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:19.026 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:19.026 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3423219' 00:20:19.026 killing process with pid 3423219 00:20:19.026 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3423219 00:20:19.026 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3423219 00:20:19.026 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:19.026 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.026 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:19.026 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.026 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:19.026 "subsystems": [ 00:20:19.026 { 00:20:19.026 "subsystem": "keyring", 00:20:19.026 "config": [ 00:20:19.026 { 00:20:19.026 "method": "keyring_file_add_key", 00:20:19.026 "params": { 00:20:19.026 "name": "key0", 00:20:19.026 "path": "/tmp/tmp.5ARcpBe19W" 00:20:19.026 } 00:20:19.026 } 00:20:19.026 ] 00:20:19.026 }, 00:20:19.026 { 00:20:19.026 "subsystem": "iobuf", 00:20:19.026 "config": [ 00:20:19.026 { 00:20:19.026 "method": "iobuf_set_options", 00:20:19.026 "params": { 00:20:19.026 "small_pool_count": 8192, 00:20:19.026 "large_pool_count": 1024, 00:20:19.026 "small_bufsize": 8192, 00:20:19.026 "large_bufsize": 135168, 00:20:19.026 "enable_numa": false 00:20:19.026 } 00:20:19.026 } 00:20:19.026 ] 00:20:19.026 }, 00:20:19.026 { 00:20:19.026 "subsystem": "sock", 00:20:19.026 "config": [ 00:20:19.026 { 00:20:19.026 "method": "sock_set_default_impl", 00:20:19.026 "params": { 00:20:19.026 "impl_name": "posix" 00:20:19.026 } 00:20:19.026 }, 00:20:19.026 { 00:20:19.026 "method": "sock_impl_set_options", 00:20:19.026 "params": { 00:20:19.026 "impl_name": "ssl", 00:20:19.026 "recv_buf_size": 4096, 00:20:19.026 "send_buf_size": 4096, 00:20:19.026 "enable_recv_pipe": true, 00:20:19.026 "enable_quickack": false, 00:20:19.026 "enable_placement_id": 0, 00:20:19.026 "enable_zerocopy_send_server": true, 00:20:19.026 "enable_zerocopy_send_client": false, 00:20:19.026 "zerocopy_threshold": 0, 00:20:19.026 "tls_version": 0, 00:20:19.026 "enable_ktls": false 00:20:19.026 } 00:20:19.026 }, 00:20:19.026 { 00:20:19.026 "method": "sock_impl_set_options", 00:20:19.026 "params": { 00:20:19.026 "impl_name": "posix", 00:20:19.026 "recv_buf_size": 2097152, 00:20:19.026 "send_buf_size": 2097152, 00:20:19.026 "enable_recv_pipe": true, 00:20:19.026 "enable_quickack": false, 00:20:19.026 "enable_placement_id": 0, 00:20:19.026 "enable_zerocopy_send_server": true, 00:20:19.026 "enable_zerocopy_send_client": 
false, 00:20:19.026 "zerocopy_threshold": 0, 00:20:19.026 "tls_version": 0, 00:20:19.026 "enable_ktls": false 00:20:19.026 } 00:20:19.026 } 00:20:19.026 ] 00:20:19.026 }, 00:20:19.026 { 00:20:19.026 "subsystem": "vmd", 00:20:19.026 "config": [] 00:20:19.026 }, 00:20:19.026 { 00:20:19.026 "subsystem": "accel", 00:20:19.026 "config": [ 00:20:19.026 { 00:20:19.026 "method": "accel_set_options", 00:20:19.026 "params": { 00:20:19.026 "small_cache_size": 128, 00:20:19.026 "large_cache_size": 16, 00:20:19.026 "task_count": 2048, 00:20:19.026 "sequence_count": 2048, 00:20:19.026 "buf_count": 2048 00:20:19.026 } 00:20:19.026 } 00:20:19.026 ] 00:20:19.026 }, 00:20:19.026 { 00:20:19.026 "subsystem": "bdev", 00:20:19.026 "config": [ 00:20:19.026 { 00:20:19.026 "method": "bdev_set_options", 00:20:19.026 "params": { 00:20:19.026 "bdev_io_pool_size": 65535, 00:20:19.026 "bdev_io_cache_size": 256, 00:20:19.026 "bdev_auto_examine": true, 00:20:19.026 "iobuf_small_cache_size": 128, 00:20:19.026 "iobuf_large_cache_size": 16 00:20:19.026 } 00:20:19.026 }, 00:20:19.026 { 00:20:19.026 "method": "bdev_raid_set_options", 00:20:19.026 "params": { 00:20:19.026 "process_window_size_kb": 1024, 00:20:19.026 "process_max_bandwidth_mb_sec": 0 00:20:19.026 } 00:20:19.026 }, 00:20:19.026 { 00:20:19.026 "method": "bdev_iscsi_set_options", 00:20:19.026 "params": { 00:20:19.026 "timeout_sec": 30 00:20:19.026 } 00:20:19.026 }, 00:20:19.026 { 00:20:19.026 "method": "bdev_nvme_set_options", 00:20:19.026 "params": { 00:20:19.026 "action_on_timeout": "none", 00:20:19.026 "timeout_us": 0, 00:20:19.026 "timeout_admin_us": 0, 00:20:19.026 "keep_alive_timeout_ms": 10000, 00:20:19.026 "arbitration_burst": 0, 00:20:19.026 "low_priority_weight": 0, 00:20:19.026 "medium_priority_weight": 0, 00:20:19.026 "high_priority_weight": 0, 00:20:19.026 "nvme_adminq_poll_period_us": 10000, 00:20:19.026 "nvme_ioq_poll_period_us": 0, 00:20:19.026 "io_queue_requests": 0, 00:20:19.026 "delay_cmd_submit": true, 00:20:19.026 "transport_retry_count": 4, 00:20:19.026 "bdev_retry_count": 3, 00:20:19.026 "transport_ack_timeout": 0, 00:20:19.026 "ctrlr_loss_timeout_sec": 0, 00:20:19.026 "reconnect_delay_sec": 0, 00:20:19.026 "fast_io_fail_timeout_sec": 0, 00:20:19.026 "disable_auto_failback": false, 00:20:19.026 "generate_uuids": false, 00:20:19.026 "transport_tos": 0, 00:20:19.026 "nvme_error_stat": false, 00:20:19.027 "rdma_srq_size": 0, 00:20:19.027 "io_path_stat": false, 00:20:19.027 "allow_accel_sequence": false, 00:20:19.027 "rdma_max_cq_size": 0, 00:20:19.027 "rdma_cm_event_timeout_ms": 0, 00:20:19.027 "dhchap_digests": [ 00:20:19.027 "sha256", 00:20:19.027 "sha384", 00:20:19.027 "sha512" 00:20:19.027 ], 00:20:19.027 "dhchap_dhgroups": [ 00:20:19.027 "null", 00:20:19.027 "ffdhe2048", 00:20:19.027 "ffdhe3072", 00:20:19.027 "ffdhe4096", 00:20:19.027 "ffdhe6144", 00:20:19.027 "ffdhe8192" 00:20:19.027 ] 00:20:19.027 } 00:20:19.027 }, 00:20:19.027 { 00:20:19.027 "method": "bdev_nvme_set_hotplug", 00:20:19.027 "params": { 00:20:19.027 "period_us": 100000, 00:20:19.027 "enable": false 00:20:19.027 } 00:20:19.027 }, 00:20:19.027 { 00:20:19.027 "method": "bdev_malloc_create", 00:20:19.027 "params": { 00:20:19.027 "name": "malloc0", 00:20:19.027 "num_blocks": 8192, 00:20:19.027 "block_size": 4096, 00:20:19.027 "physical_block_size": 4096, 00:20:19.027 "uuid": "f0065ab7-0984-4cfb-b22e-c891f6bb2aa2", 00:20:19.027 "optimal_io_boundary": 0, 00:20:19.027 "md_size": 0, 00:20:19.027 "dif_type": 0, 00:20:19.027 "dif_is_head_of_md": false, 00:20:19.027 "dif_pi_format": 0 
00:20:19.027 } 00:20:19.027 }, 00:20:19.027 { 00:20:19.027 "method": "bdev_wait_for_examine" 00:20:19.027 } 00:20:19.027 ] 00:20:19.027 }, 00:20:19.027 { 00:20:19.027 "subsystem": "nbd", 00:20:19.027 "config": [] 00:20:19.027 }, 00:20:19.027 { 00:20:19.027 "subsystem": "scheduler", 00:20:19.027 "config": [ 00:20:19.027 { 00:20:19.027 "method": "framework_set_scheduler", 00:20:19.027 "params": { 00:20:19.027 "name": "static" 00:20:19.027 } 00:20:19.027 } 00:20:19.027 ] 00:20:19.027 }, 00:20:19.027 { 00:20:19.027 "subsystem": "nvmf", 00:20:19.027 "config": [ 00:20:19.027 { 00:20:19.027 "method": "nvmf_set_config", 00:20:19.027 "params": { 00:20:19.027 "discovery_filter": "match_any", 00:20:19.027 "admin_cmd_passthru": { 00:20:19.027 "identify_ctrlr": false 00:20:19.027 }, 00:20:19.027 "dhchap_digests": [ 00:20:19.027 "sha256", 00:20:19.027 "sha384", 00:20:19.027 "sha512" 00:20:19.027 ], 00:20:19.027 "dhchap_dhgroups": [ 00:20:19.027 "null", 00:20:19.027 "ffdhe2048", 00:20:19.027 "ffdhe3072", 00:20:19.027 "ffdhe4096", 00:20:19.027 "ffdhe6144", 00:20:19.027 "ffdhe8192" 00:20:19.027 ] 00:20:19.027 } 00:20:19.027 }, 00:20:19.027 { 00:20:19.027 "method": "nvmf_set_max_subsystems", 00:20:19.027 "params": { 00:20:19.027 "max_subsystems": 1024 00:20:19.027 } 00:20:19.027 }, 00:20:19.027 { 00:20:19.027 "method": "nvmf_set_crdt", 00:20:19.027 "params": { 00:20:19.027 "crdt1": 0, 00:20:19.027 "crdt2": 0, 00:20:19.027 "crdt3": 0 00:20:19.027 } 00:20:19.027 }, 00:20:19.027 { 00:20:19.027 "method": "nvmf_create_transport", 00:20:19.027 "params": { 00:20:19.027 "trtype": "TCP", 00:20:19.027 "max_queue_depth": 128, 00:20:19.027 "max_io_qpairs_per_ctrlr": 127, 00:20:19.027 "in_capsule_data_size": 4096, 00:20:19.027 "max_io_size": 131072, 00:20:19.027 "io_unit_size": 131072, 00:20:19.027 "max_aq_depth": 128, 00:20:19.027 "num_shared_buffers": 511, 00:20:19.027 "buf_cache_size": 4294967295, 00:20:19.027 "dif_insert_or_strip": false, 00:20:19.027 "zcopy": false, 00:20:19.027 "c2h_success": false, 00:20:19.027 "sock_priority": 0, 00:20:19.027 "abort_timeout_sec": 1, 00:20:19.027 "ack_timeout": 0, 00:20:19.027 "data_wr_pool_size": 0 00:20:19.027 } 00:20:19.027 }, 00:20:19.027 { 00:20:19.027 "method": "nvmf_create_subsystem", 00:20:19.027 "params": { 00:20:19.027 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.027 "allow_any_host": false, 00:20:19.027 "serial_number": "00000000000000000000", 00:20:19.027 "model_number": "SPDK bdev Controller", 00:20:19.027 "max_namespaces": 32, 00:20:19.027 "min_cntlid": 1, 00:20:19.027 "max_cntlid": 65519, 00:20:19.027 "ana_reporting": false 00:20:19.027 } 00:20:19.027 }, 00:20:19.027 { 00:20:19.027 "method": "nvmf_subsystem_add_host", 00:20:19.027 "params": { 00:20:19.027 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.027 "host": "nqn.2016-06.io.spdk:host1", 00:20:19.027 "psk": "key0" 00:20:19.027 } 00:20:19.027 }, 00:20:19.027 { 00:20:19.027 "method": "nvmf_subsystem_add_ns", 00:20:19.027 "params": { 00:20:19.027 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.027 "namespace": { 00:20:19.027 "nsid": 1, 00:20:19.027 "bdev_name": "malloc0", 00:20:19.027 "nguid": "F0065AB709844CFBB22EC891F6BB2AA2", 00:20:19.027 "uuid": "f0065ab7-0984-4cfb-b22e-c891f6bb2aa2", 00:20:19.027 "no_auto_visible": false 00:20:19.027 } 00:20:19.027 } 00:20:19.027 }, 00:20:19.027 { 00:20:19.027 "method": "nvmf_subsystem_add_listener", 00:20:19.027 "params": { 00:20:19.027 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.027 "listen_address": { 00:20:19.027 "trtype": "TCP", 00:20:19.027 "adrfam": "IPv4", 
00:20:19.027 "traddr": "10.0.0.2", 00:20:19.027 "trsvcid": "4420" 00:20:19.027 }, 00:20:19.027 "secure_channel": false, 00:20:19.027 "sock_impl": "ssl" 00:20:19.027 } 00:20:19.028 } 00:20:19.028 ] 00:20:19.028 } 00:20:19.028 ] 00:20:19.028 }' 00:20:19.028 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3423929 00:20:19.028 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3423929 00:20:19.028 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:19.028 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3423929 ']' 00:20:19.028 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.028 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:19.028 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.028 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:19.028 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.028 [2024-11-20 07:34:37.201113] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:20:19.028 [2024-11-20 07:34:37.201171] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.289 [2024-11-20 07:34:37.290580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.289 [2024-11-20 07:34:37.320104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.289 [2024-11-20 07:34:37.320130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.289 [2024-11-20 07:34:37.320136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.289 [2024-11-20 07:34:37.320140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.289 [2024-11-20 07:34:37.320144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:19.289 [2024-11-20 07:34:37.320609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.550 [2024-11-20 07:34:37.515397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.550 [2024-11-20 07:34:37.547431] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.550 [2024-11-20 07:34:37.547619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.811 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:19.811 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:19.811 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:19.811 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:19.811 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.073 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.073 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3424227 00:20:20.073 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3424227 /var/tmp/bdevperf.sock 00:20:20.073 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3424227 ']' 00:20:20.073 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.073 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:20.073 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
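bdevperf is likewise restarted with -c /dev/fd/63, so the --psk attach comes from the saved bperfcfg echoed below rather than from a live RPC call. Before driving I/O, tls.sh@279 confirms that the config-driven attach actually produced the expected controller:

    # Verify the controller created purely from JSON config, then run I/O.
    name=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests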
00:20:20.073 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:20.073 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:20.073 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.073 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:20.073 "subsystems": [ 00:20:20.073 { 00:20:20.073 "subsystem": "keyring", 00:20:20.073 "config": [ 00:20:20.073 { 00:20:20.073 "method": "keyring_file_add_key", 00:20:20.073 "params": { 00:20:20.073 "name": "key0", 00:20:20.073 "path": "/tmp/tmp.5ARcpBe19W" 00:20:20.073 } 00:20:20.073 } 00:20:20.073 ] 00:20:20.073 }, 00:20:20.073 { 00:20:20.073 "subsystem": "iobuf", 00:20:20.073 "config": [ 00:20:20.073 { 00:20:20.073 "method": "iobuf_set_options", 00:20:20.073 "params": { 00:20:20.073 "small_pool_count": 8192, 00:20:20.073 "large_pool_count": 1024, 00:20:20.073 "small_bufsize": 8192, 00:20:20.073 "large_bufsize": 135168, 00:20:20.073 "enable_numa": false 00:20:20.073 } 00:20:20.073 } 00:20:20.073 ] 00:20:20.073 }, 00:20:20.073 { 00:20:20.073 "subsystem": "sock", 00:20:20.073 "config": [ 00:20:20.073 { 00:20:20.073 "method": "sock_set_default_impl", 00:20:20.073 "params": { 00:20:20.073 "impl_name": "posix" 00:20:20.073 } 00:20:20.073 }, 00:20:20.073 { 00:20:20.073 "method": "sock_impl_set_options", 00:20:20.073 "params": { 00:20:20.073 "impl_name": "ssl", 00:20:20.073 "recv_buf_size": 4096, 00:20:20.073 "send_buf_size": 4096, 00:20:20.073 "enable_recv_pipe": true, 00:20:20.073 "enable_quickack": false, 00:20:20.073 "enable_placement_id": 0, 00:20:20.073 "enable_zerocopy_send_server": true, 00:20:20.073 "enable_zerocopy_send_client": false, 00:20:20.073 "zerocopy_threshold": 0, 00:20:20.073 "tls_version": 0, 00:20:20.073 "enable_ktls": false 00:20:20.073 } 00:20:20.073 }, 00:20:20.073 { 00:20:20.073 "method": "sock_impl_set_options", 00:20:20.073 "params": { 00:20:20.073 "impl_name": "posix", 00:20:20.073 "recv_buf_size": 2097152, 00:20:20.073 "send_buf_size": 2097152, 00:20:20.073 "enable_recv_pipe": true, 00:20:20.073 "enable_quickack": false, 00:20:20.073 "enable_placement_id": 0, 00:20:20.073 "enable_zerocopy_send_server": true, 00:20:20.073 "enable_zerocopy_send_client": false, 00:20:20.073 "zerocopy_threshold": 0, 00:20:20.073 "tls_version": 0, 00:20:20.073 "enable_ktls": false 00:20:20.073 } 00:20:20.073 } 00:20:20.073 ] 00:20:20.073 }, 00:20:20.073 { 00:20:20.073 "subsystem": "vmd", 00:20:20.073 "config": [] 00:20:20.073 }, 00:20:20.073 { 00:20:20.073 "subsystem": "accel", 00:20:20.073 "config": [ 00:20:20.073 { 00:20:20.073 "method": "accel_set_options", 00:20:20.073 "params": { 00:20:20.073 "small_cache_size": 128, 00:20:20.073 "large_cache_size": 16, 00:20:20.073 "task_count": 2048, 00:20:20.073 "sequence_count": 2048, 00:20:20.073 "buf_count": 2048 00:20:20.073 } 00:20:20.073 } 00:20:20.073 ] 00:20:20.073 }, 00:20:20.073 { 00:20:20.073 "subsystem": "bdev", 00:20:20.073 "config": [ 00:20:20.073 { 00:20:20.073 "method": "bdev_set_options", 00:20:20.073 "params": { 00:20:20.073 "bdev_io_pool_size": 65535, 00:20:20.073 "bdev_io_cache_size": 256, 00:20:20.073 "bdev_auto_examine": true, 00:20:20.073 "iobuf_small_cache_size": 128, 00:20:20.073 "iobuf_large_cache_size": 16 00:20:20.073 } 00:20:20.073 }, 00:20:20.073 { 00:20:20.073 "method": 
"bdev_raid_set_options", 00:20:20.073 "params": { 00:20:20.073 "process_window_size_kb": 1024, 00:20:20.073 "process_max_bandwidth_mb_sec": 0 00:20:20.073 } 00:20:20.073 }, 00:20:20.073 { 00:20:20.073 "method": "bdev_iscsi_set_options", 00:20:20.073 "params": { 00:20:20.073 "timeout_sec": 30 00:20:20.073 } 00:20:20.073 }, 00:20:20.073 { 00:20:20.073 "method": "bdev_nvme_set_options", 00:20:20.073 "params": { 00:20:20.073 "action_on_timeout": "none", 00:20:20.073 "timeout_us": 0, 00:20:20.073 "timeout_admin_us": 0, 00:20:20.073 "keep_alive_timeout_ms": 10000, 00:20:20.073 "arbitration_burst": 0, 00:20:20.073 "low_priority_weight": 0, 00:20:20.073 "medium_priority_weight": 0, 00:20:20.073 "high_priority_weight": 0, 00:20:20.073 "nvme_adminq_poll_period_us": 10000, 00:20:20.073 "nvme_ioq_poll_period_us": 0, 00:20:20.073 "io_queue_requests": 512, 00:20:20.073 "delay_cmd_submit": true, 00:20:20.073 "transport_retry_count": 4, 00:20:20.073 "bdev_retry_count": 3, 00:20:20.073 "transport_ack_timeout": 0, 00:20:20.073 "ctrlr_loss_timeout_sec": 0, 00:20:20.073 "reconnect_delay_sec": 0, 00:20:20.073 "fast_io_fail_timeout_sec": 0, 00:20:20.073 "disable_auto_failback": false, 00:20:20.073 "generate_uuids": false, 00:20:20.073 "transport_tos": 0, 00:20:20.073 "nvme_error_stat": false, 00:20:20.073 "rdma_srq_size": 0, 00:20:20.073 "io_path_stat": false, 00:20:20.073 "allow_accel_sequence": false, 00:20:20.073 "rdma_max_cq_size": 0, 00:20:20.073 "rdma_cm_event_timeout_ms": 0, 00:20:20.073 "dhchap_digests": [ 00:20:20.073 "sha256", 00:20:20.073 "sha384", 00:20:20.073 "sha512" 00:20:20.073 ], 00:20:20.073 "dhchap_dhgroups": [ 00:20:20.073 "null", 00:20:20.073 "ffdhe2048", 00:20:20.073 "ffdhe3072", 00:20:20.073 "ffdhe4096", 00:20:20.073 "ffdhe6144", 00:20:20.073 "ffdhe8192" 00:20:20.073 ] 00:20:20.073 } 00:20:20.073 }, 00:20:20.073 { 00:20:20.074 "method": "bdev_nvme_attach_controller", 00:20:20.074 "params": { 00:20:20.074 "name": "nvme0", 00:20:20.074 "trtype": "TCP", 00:20:20.074 "adrfam": "IPv4", 00:20:20.074 "traddr": "10.0.0.2", 00:20:20.074 "trsvcid": "4420", 00:20:20.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.074 "prchk_reftag": false, 00:20:20.074 "prchk_guard": false, 00:20:20.074 "ctrlr_loss_timeout_sec": 0, 00:20:20.074 "reconnect_delay_sec": 0, 00:20:20.074 "fast_io_fail_timeout_sec": 0, 00:20:20.074 "psk": "key0", 00:20:20.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.074 "hdgst": false, 00:20:20.074 "ddgst": false, 00:20:20.074 "multipath": "multipath" 00:20:20.074 } 00:20:20.074 }, 00:20:20.074 { 00:20:20.074 "method": "bdev_nvme_set_hotplug", 00:20:20.074 "params": { 00:20:20.074 "period_us": 100000, 00:20:20.074 "enable": false 00:20:20.074 } 00:20:20.074 }, 00:20:20.074 { 00:20:20.074 "method": "bdev_enable_histogram", 00:20:20.074 "params": { 00:20:20.074 "name": "nvme0n1", 00:20:20.074 "enable": true 00:20:20.074 } 00:20:20.074 }, 00:20:20.074 { 00:20:20.074 "method": "bdev_wait_for_examine" 00:20:20.074 } 00:20:20.074 ] 00:20:20.074 }, 00:20:20.074 { 00:20:20.074 "subsystem": "nbd", 00:20:20.074 "config": [] 00:20:20.074 } 00:20:20.074 ] 00:20:20.074 }' 00:20:20.074 [2024-11-20 07:34:38.074384] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:20:20.074 [2024-11-20 07:34:38.074439] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424227 ] 00:20:20.074 [2024-11-20 07:34:38.160349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.074 [2024-11-20 07:34:38.190322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.334 [2024-11-20 07:34:38.326658] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.955 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:20.955 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:20.955 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:20.955 07:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:20.955 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.955 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:21.246 Running I/O for 1 seconds... 00:20:22.226 4822.00 IOPS, 18.84 MiB/s 00:20:22.226 Latency(us) 00:20:22.226 [2024-11-20T06:34:40.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.226 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:22.226 Verification LBA range: start 0x0 length 0x2000 00:20:22.226 nvme0n1 : 1.03 4795.54 18.73 0.00 0.00 26349.42 4614.83 51554.99 00:20:22.226 [2024-11-20T06:34:40.436Z] =================================================================================================================== 00:20:22.226 [2024-11-20T06:34:40.436Z] Total : 4795.54 18.73 0.00 0.00 26349.42 4614.83 51554.99 00:20:22.226 { 00:20:22.226 "results": [ 00:20:22.226 { 00:20:22.226 "job": "nvme0n1", 00:20:22.226 "core_mask": "0x2", 00:20:22.226 "workload": "verify", 00:20:22.226 "status": "finished", 00:20:22.226 "verify_range": { 00:20:22.226 "start": 0, 00:20:22.226 "length": 8192 00:20:22.226 }, 00:20:22.226 "queue_depth": 128, 00:20:22.226 "io_size": 4096, 00:20:22.226 "runtime": 1.032417, 00:20:22.226 "iops": 4795.542886256232, 00:20:22.226 "mibps": 18.732589399438407, 00:20:22.226 "io_failed": 0, 00:20:22.226 "io_timeout": 0, 00:20:22.226 "avg_latency_us": 26349.416602706522, 00:20:22.227 "min_latency_us": 4614.826666666667, 00:20:22.227 "max_latency_us": 51554.986666666664 00:20:22.227 } 00:20:22.227 ], 00:20:22.227 "core_count": 1 00:20:22.227 } 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id 
= --pid ']' 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:22.227 nvmf_trace.0 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3424227 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3424227 ']' 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3424227 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3424227 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3424227' 00:20:22.227 killing process with pid 3424227 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3424227 00:20:22.227 Received shutdown signal, test time was about 1.000000 seconds 00:20:22.227 00:20:22.227 Latency(us) 00:20:22.227 [2024-11-20T06:34:40.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.227 [2024-11-20T06:34:40.437Z] =================================================================================================================== 00:20:22.227 [2024-11-20T06:34:40.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:22.227 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3424227 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:22.488 rmmod nvme_tcp 00:20:22.488 rmmod nvme_fabrics 00:20:22.488 rmmod nvme_keyring 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:22.488 07:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3423929 ']' 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3423929 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3423929 ']' 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3423929 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3423929 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3423929' 00:20:22.488 killing process with pid 3423929 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3423929 00:20:22.488 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3423929 00:20:22.749 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:22.749 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:22.749 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:22.749 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:22.749 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:22.749 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:22.749 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:22.749 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:22.749 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:22.749 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.749 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.749 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.664 07:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:24.664 07:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.xNp1qyUTrn /tmp/tmp.MkHP4hzzD3 /tmp/tmp.5ARcpBe19W 00:20:24.664 00:20:24.664 real 1m28.032s 00:20:24.664 user 2m20.138s 00:20:24.664 sys 0m26.733s 00:20:24.664 07:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:24.664 07:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.664 ************************************ 00:20:24.664 END TEST nvmf_tls 
00:20:24.664 ************************************ 00:20:24.664 07:34:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:24.664 07:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:24.664 07:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:24.664 07:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:24.926 ************************************ 00:20:24.926 START TEST nvmf_fips 00:20:24.926 ************************************ 00:20:24.926 07:34:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:24.926 * Looking for test storage... 00:20:24.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:24.926 07:34:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:24.926 07:34:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:24.926 07:34:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:24.926 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:24.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.927 --rc genhtml_branch_coverage=1 00:20:24.927 --rc genhtml_function_coverage=1 00:20:24.927 --rc genhtml_legend=1 00:20:24.927 --rc geninfo_all_blocks=1 00:20:24.927 --rc geninfo_unexecuted_blocks=1 00:20:24.927 00:20:24.927 ' 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:24.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.927 --rc genhtml_branch_coverage=1 00:20:24.927 --rc genhtml_function_coverage=1 00:20:24.927 --rc genhtml_legend=1 00:20:24.927 --rc geninfo_all_blocks=1 00:20:24.927 --rc geninfo_unexecuted_blocks=1 00:20:24.927 00:20:24.927 ' 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:24.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.927 --rc genhtml_branch_coverage=1 00:20:24.927 --rc genhtml_function_coverage=1 00:20:24.927 --rc genhtml_legend=1 00:20:24.927 --rc geninfo_all_blocks=1 00:20:24.927 --rc geninfo_unexecuted_blocks=1 00:20:24.927 00:20:24.927 ' 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:24.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.927 --rc genhtml_branch_coverage=1 00:20:24.927 --rc genhtml_function_coverage=1 00:20:24.927 --rc genhtml_legend=1 00:20:24.927 --rc geninfo_all_blocks=1 00:20:24.927 --rc geninfo_unexecuted_blocks=1 00:20:24.927 00:20:24.927 ' 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:24.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:24.927 07:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:24.927 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:25.197 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:25.198 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.199 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:25.200 Error setting digest 00:20:25.200 407272F2197F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:25.200 407272F2197F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:25.200 
07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:25.200 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.345 07:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:33.345 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:33.345 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.345 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:33.346 07:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:33.346 Found net devices under 0000:31:00.0: cvl_0_0 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:33.346 Found net devices under 0000:31:00.1: cvl_0_1 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:33.346 07:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:33.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:20:33.346 00:20:33.346 --- 10.0.0.2 ping statistics --- 00:20:33.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.346 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:33.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:33.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:20:33.346 00:20:33.346 --- 10.0.0.1 ping statistics --- 00:20:33.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.346 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3429025 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3429025 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3429025 ']' 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:33.346 07:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:33.346 [2024-11-20 07:34:51.038285] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
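
For the FIPS test, nvmftestinit again splits the two E810 ports into a point-to-point test bed: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Stripped of xtrace noise, the sequence above reduces to this sketch:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target then runs entirely inside the namespace
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2
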
00:20:33.346 [2024-11-20 07:34:51.038356] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.346 [2024-11-20 07:34:51.140560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.346 [2024-11-20 07:34:51.190477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.346 [2024-11-20 07:34:51.190535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.346 [2024-11-20 07:34:51.190545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.346 [2024-11-20 07:34:51.190552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.346 [2024-11-20 07:34:51.190558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.346 [2024-11-20 07:34:51.191383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.xBd 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.xBd 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.xBd 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.xBd 00:20:33.919 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:33.919 [2024-11-20 07:34:52.075223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.919 [2024-11-20 07:34:52.091217] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:33.919 [2024-11-20 07:34:52.091589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.180 malloc0 00:20:34.180 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.180 07:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3429244 00:20:34.180 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3429244 /var/tmp/bdevperf.sock 00:20:34.180 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.180 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3429244 ']' 00:20:34.180 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.180 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:34.180 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.180 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:34.180 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.180 [2024-11-20 07:34:52.233797] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:20:34.180 [2024-11-20 07:34:52.233870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429244 ] 00:20:34.180 [2024-11-20 07:34:52.328610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.180 [2024-11-20 07:34:52.379409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.122 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:35.122 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:35.122 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.xBd 00:20:35.122 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:35.383 [2024-11-20 07:34:53.390585] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.383 TLSTESTn1 00:20:35.383 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:35.383 Running I/O for 10 seconds... 
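
Unlike the earlier tls.sh run, which baked the PSK into bdevperf's configuration file, fips.sh installs the key into the running instance over RPC. The initiator-side sequence above condenses to the following sketch, with the key value and paths exactly as captured in this log:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)             # /tmp/spdk-psk.xBd in this run
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"                         # restrict the key file before registering it
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $rpc keyring_file_add_key key0 "$key_path"
    $rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
            --psk key0
    # start the queued verify workload; the IOPS samples and the JSON summary
    # that follow in the log are its output
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
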
00:20:37.712 4102.00 IOPS, 16.02 MiB/s [2024-11-20T06:34:56.864Z] 5069.50 IOPS, 19.80 MiB/s [2024-11-20T06:34:57.805Z] 4871.67 IOPS, 19.03 MiB/s [2024-11-20T06:34:58.770Z] 5156.50 IOPS, 20.14 MiB/s [2024-11-20T06:34:59.715Z] 5144.40 IOPS, 20.10 MiB/s [2024-11-20T06:35:00.656Z] 4991.67 IOPS, 19.50 MiB/s [2024-11-20T06:35:02.040Z] 4866.43 IOPS, 19.01 MiB/s [2024-11-20T06:35:02.980Z] 5005.62 IOPS, 19.55 MiB/s [2024-11-20T06:35:03.921Z] 5145.11 IOPS, 20.10 MiB/s [2024-11-20T06:35:03.921Z] 5146.30 IOPS, 20.10 MiB/s
00:20:45.711 Latency(us)
00:20:45.711 [2024-11-20T06:35:03.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:45.712 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:45.712 Verification LBA range: start 0x0 length 0x2000
00:20:45.712 TLSTESTn1 : 10.04 5138.55 20.07 0.00 0.00 24845.96 5925.55 45219.84
00:20:45.712 [2024-11-20T06:35:03.922Z] ===================================================================================================================
00:20:45.712 [2024-11-20T06:35:03.922Z] Total : 5138.55 20.07 0.00 0.00 24845.96 5925.55 45219.84
00:20:45.712 {
00:20:45.712 "results": [
00:20:45.712 {
00:20:45.712 "job": "TLSTESTn1",
00:20:45.712 "core_mask": "0x4",
00:20:45.712 "workload": "verify",
00:20:45.712 "status": "finished",
00:20:45.712 "verify_range": {
00:20:45.712 "start": 0,
00:20:45.712 "length": 8192
00:20:45.712 },
00:20:45.712 "queue_depth": 128,
00:20:45.712 "io_size": 4096,
00:20:45.712 "runtime": 10.039793,
00:20:45.712 "iops": 5138.552159392131,
00:20:45.712 "mibps": 20.072469372625513,
00:20:45.712 "io_failed": 0,
00:20:45.712 "io_timeout": 0,
00:20:45.712 "avg_latency_us": 24845.958343348193,
00:20:45.712 "min_latency_us": 5925.546666666667,
00:20:45.712 "max_latency_us": 45219.84
00:20:45.712 }
00:20:45.712 ],
00:20:45.712 "core_count": 1
00:20:45.712 }
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']'
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]]
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:45.712 nvmf_trace.0
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3429244
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3429244 ']'
00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips --
common/autotest_common.sh@956 -- # kill -0 3429244 00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3429244 00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3429244' 00:20:45.712 killing process with pid 3429244 00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3429244 00:20:45.712 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.712 00:20:45.712 Latency(us) 00:20:45.712 [2024-11-20T06:35:03.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.712 [2024-11-20T06:35:03.922Z] =================================================================================================================== 00:20:45.712 [2024-11-20T06:35:03.922Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.712 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3429244 00:20:45.973 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:45.973 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:45.973 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:45.973 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:45.973 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:45.973 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:45.973 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:45.973 rmmod nvme_tcp 00:20:45.973 rmmod nvme_fabrics 00:20:45.973 rmmod nvme_keyring 00:20:45.973 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:45.973 07:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3429025 ']' 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3429025 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3429025 ']' 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3429025 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3429025 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:45.973 07:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3429025' 00:20:45.973 killing process with pid 3429025 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3429025 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3429025 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:45.973 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:46.235 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:46.235 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:46.235 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:46.235 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:46.235 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.235 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.235 07:35:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.151 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:48.151 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.xBd 00:20:48.151 00:20:48.151 real 0m23.381s 00:20:48.151 user 0m24.921s 00:20:48.151 sys 0m9.812s 00:20:48.151 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:48.151 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:48.151 ************************************ 00:20:48.151 END TEST nvmf_fips 00:20:48.151 ************************************ 00:20:48.151 07:35:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:48.151 07:35:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:48.151 07:35:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:48.151 07:35:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.151 ************************************ 00:20:48.151 START TEST nvmf_control_msg_list 00:20:48.151 ************************************ 00:20:48.151 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:48.413 * Looking for test storage... 
00:20:48.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:48.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.413 --rc genhtml_branch_coverage=1 00:20:48.413 --rc genhtml_function_coverage=1 00:20:48.413 --rc genhtml_legend=1 00:20:48.413 --rc geninfo_all_blocks=1 00:20:48.413 --rc geninfo_unexecuted_blocks=1 00:20:48.413 00:20:48.413 ' 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:48.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.413 --rc genhtml_branch_coverage=1 00:20:48.413 --rc genhtml_function_coverage=1 00:20:48.413 --rc genhtml_legend=1 00:20:48.413 --rc geninfo_all_blocks=1 00:20:48.413 --rc geninfo_unexecuted_blocks=1 00:20:48.413 00:20:48.413 ' 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:48.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.413 --rc genhtml_branch_coverage=1 00:20:48.413 --rc genhtml_function_coverage=1 00:20:48.413 --rc genhtml_legend=1 00:20:48.413 --rc geninfo_all_blocks=1 00:20:48.413 --rc geninfo_unexecuted_blocks=1 00:20:48.413 00:20:48.413 ' 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:48.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.413 --rc genhtml_branch_coverage=1 00:20:48.413 --rc genhtml_function_coverage=1 00:20:48.413 --rc genhtml_legend=1 00:20:48.413 --rc geninfo_all_blocks=1 00:20:48.413 --rc geninfo_unexecuted_blocks=1 00:20:48.413 00:20:48.413 ' 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.413 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:48.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:48.414 07:35:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:56.561 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.561 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:56.562 07:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:56.562 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.562 07:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:56.562 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:56.562 Found net devices under 0000:31:00.0: cvl_0_0 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:56.562 Found net devices under 0000:31:00.1: cvl_0_1 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:56.562 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.563 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.563 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:56.563 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:56.563 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.563 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.563 07:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:56.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:20:56.563 00:20:56.563 --- 10.0.0.2 ping statistics --- 00:20:56.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.563 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:56.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:20:56.563 00:20:56.563 --- 10.0.0.1 ping statistics --- 00:20:56.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.563 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3435772 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3435772 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 3435772 ']' 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:56.563 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:56.563 [2024-11-20 07:35:14.346440] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:20:56.563 [2024-11-20 07:35:14.346504] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.563 [2024-11-20 07:35:14.447456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.563 [2024-11-20 07:35:14.498823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.563 [2024-11-20 07:35:14.498875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.563 [2024-11-20 07:35:14.498884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.563 [2024-11-20 07:35:14.498891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.563 [2024-11-20 07:35:14.498899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
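(The nvmfappstart/waitforlisten exchange traced here is the harness blocking until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. Below is a simplified stand-in with hypothetical names; the real waitforlisten in autotest_common.sh is more thorough, this only shows the polling idea.)

    # Poll until the app is alive *and* its RPC socket answers; bail out if it dies.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    wait_for_rpc() {                     # hypothetical stand-in for waitforlisten
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1          # process exited early
            if [ -S "$sock" ] && "$rpc" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
                return 0                                    # target is up and serving RPCs
            fi
            sleep 0.1
        done
        return 1                                            # timed out
    }
    # usage mirroring the traced call: wait_for_rpc 3435772 /var/tmp/spdk.sock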
00:20:56.563 [2024-11-20 07:35:14.499690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.135 [2024-11-20 07:35:15.227222] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:57.135 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.136 Malloc0 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.136 07:35:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:57.136 [2024-11-20 07:35:15.281761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3435971 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3435973 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3435974 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3435971 00:20:57.136 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:57.397 [2024-11-20 07:35:15.382656] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:57.397 [2024-11-20 07:35:15.382979] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:57.397 [2024-11-20 07:35:15.383310] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:58.339 Initializing NVMe Controllers 00:20:58.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:58.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:58.339 Initialization complete. Launching workers. 
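(Condensed, the control_msg_list setup and load just traced is the sequence below; every flag comes from the rpc_cmd and spdk_nvme_perf lines above. In the real run each rpc.py call is wrapped in `ip netns exec cvl_0_0_ns_spdk`, omitted here for brevity. Capping the transport at a single control message while three single-queue perf clients connect at once is what produces the skewed latencies reported next.)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    # one in-capsule control-message buffer for the whole transport
    $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    $rpc bdev_malloc_create -b Malloc0 32 512
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # three concurrent single-queue readers, one core each
    tr='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    $perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r "$tr" & p1=$!
    $perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r "$tr" & p2=$!
    $perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r "$tr" & p3=$!
    wait $p1 $p2 $p3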
00:20:58.339 ========================================================
00:20:58.339 Latency(us)
00:20:58.339 Device Information : IOPS MiB/s Average min max
00:20:58.339 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40901.69 40732.50 41091.21
00:20:58.339 ========================================================
00:20:58.339 Total : 25.00 0.10 40901.69 40732.50 41091.21
00:20:58.339 
00:20:58.339 Initializing NVMe Controllers
00:20:58.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:20:58.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:20:58.339 Initialization complete. Launching workers.
00:20:58.339 ========================================================
00:20:58.339 Latency(us)
00:20:58.339 Device Information : IOPS MiB/s Average min max
00:20:58.339 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40927.70 40762.07 41421.66
00:20:58.339 ========================================================
00:20:58.339 Total : 25.00 0.10 40927.70 40762.07 41421.66
00:20:58.339 
00:20:58.339 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3435973
00:20:58.339 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3435974
00:20:58.600 Initializing NVMe Controllers
00:20:58.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:20:58.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:20:58.600 Initialization complete. Launching workers.
00:20:58.600 ========================================================
00:20:58.600 Latency(us)
00:20:58.600 Device Information : IOPS MiB/s Average min max
00:20:58.600 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1502.00 5.87 665.81 313.68 876.52
00:20:58.600 ========================================================
00:20:58.600 Total : 1502.00 5.87 665.81 313.68 876.52
00:20:58.600 
00:20:58.600 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:20:58.600 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:20:58.600 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:58.600 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:20:58.600 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:58.600 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:20:58.600 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:58.600 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:58.600 rmmod nvme_tcp
00:20:58.600 rmmod nvme_fabrics
00:20:58.600 rmmod nvme_keyring
00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 3435772 ']' 00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3435772 00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 3435772 ']' 00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 3435772 00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3435772 00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3435772' 00:20:58.601 killing process with pid 3435772 00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 3435772 00:20:58.601 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 3435772 00:20:58.862 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:58.862 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:58.862 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:58.862 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:58.862 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:58.862 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:58.862 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:58.862 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:58.862 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:58.862 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.862 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.862 07:35:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.406 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:01.406 00:21:01.406 real 0m12.649s 00:21:01.406 user 0m8.066s 00:21:01.406 sys 0m6.687s 00:21:01.406 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:01.406 07:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.406 ************************************ 00:21:01.406 END TEST nvmf_control_msg_list 00:21:01.406 
************************************ 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:01.406 ************************************ 00:21:01.406 START TEST nvmf_wait_for_buf 00:21:01.406 ************************************ 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:01.406 * Looking for test storage... 00:21:01.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:01.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.406 --rc genhtml_branch_coverage=1 00:21:01.406 --rc genhtml_function_coverage=1 00:21:01.406 --rc genhtml_legend=1 00:21:01.406 --rc geninfo_all_blocks=1 00:21:01.406 --rc geninfo_unexecuted_blocks=1 00:21:01.406 00:21:01.406 ' 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:01.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.406 --rc genhtml_branch_coverage=1 00:21:01.406 --rc genhtml_function_coverage=1 00:21:01.406 --rc genhtml_legend=1 00:21:01.406 --rc geninfo_all_blocks=1 00:21:01.406 --rc geninfo_unexecuted_blocks=1 00:21:01.406 00:21:01.406 ' 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:01.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.406 --rc genhtml_branch_coverage=1 00:21:01.406 --rc genhtml_function_coverage=1 00:21:01.406 --rc genhtml_legend=1 00:21:01.406 --rc geninfo_all_blocks=1 00:21:01.406 --rc geninfo_unexecuted_blocks=1 00:21:01.406 00:21:01.406 ' 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:01.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.406 --rc genhtml_branch_coverage=1 00:21:01.406 --rc genhtml_function_coverage=1 00:21:01.406 --rc genhtml_legend=1 00:21:01.406 --rc geninfo_all_blocks=1 00:21:01.406 --rc geninfo_unexecuted_blocks=1 00:21:01.406 00:21:01.406 ' 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.406 07:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.406 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:01.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:01.407 07:35:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.550 
07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:09.550 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:09.550 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:09.550 Found net devices under 0000:31:00.0: cvl_0_0 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:09.550 Found net devices under 0000:31:00.1: cvl_0_1 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.550 07:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:09.550 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:09.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:21:09.551 00:21:09.551 --- 10.0.0.2 ping statistics --- 00:21:09.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.551 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:09.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:21:09.551 00:21:09.551 --- 10.0.0.1 ping statistics --- 00:21:09.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.551 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3440507 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3440507 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 3440507 ']' 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:09.551 07:35:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:09.551 [2024-11-20 07:35:27.039402] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:21:09.551 [2024-11-20 07:35:27.039466] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.551 [2024-11-20 07:35:27.141638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.551 [2024-11-20 07:35:27.191677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.551 [2024-11-20 07:35:27.191730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.551 [2024-11-20 07:35:27.191738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.551 [2024-11-20 07:35:27.191754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.551 [2024-11-20 07:35:27.191761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.551 [2024-11-20 07:35:27.192541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:09.812 07:35:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.812 07:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:09.812 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.812 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:09.812 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.812 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.073 Malloc0 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.073 [2024-11-20 07:35:28.030847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.073 [2024-11-20 07:35:28.067175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.073 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.074 07:35:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:10.074 [2024-11-20 07:35:28.175869] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:11.990 Initializing NVMe Controllers 00:21:11.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:11.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:11.990 Initialization complete. Launching workers. 00:21:11.990 ======================================================== 00:21:11.990 Latency(us) 00:21:11.990 Device Information : IOPS MiB/s Average min max 00:21:11.990 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 167111.07 47849.31 199528.44 00:21:11.990 ======================================================== 00:21:11.990 Total : 25.00 3.12 167111.07 47849.31 199528.44 00:21:11.990 00:21:11.990 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:11.990 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:11.990 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:11.991 rmmod nvme_tcp 00:21:11.991 rmmod nvme_fabrics 00:21:11.991 rmmod nvme_keyring 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3440507 ']' 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3440507 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 3440507 ']' 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 3440507 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3440507 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3440507' 00:21:11.991 killing process with pid 3440507 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 3440507 00:21:11.991 07:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 3440507 00:21:11.991 07:35:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:11.991 07:35:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:11.991 07:35:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:11.991 07:35:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:11.991 07:35:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:11.991 07:35:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:11.991 07:35:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:11.991 07:35:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:11.991 07:35:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:11.991 07:35:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.991 07:35:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.991 07:35:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.536 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:14.536 00:21:14.536 real 0m13.110s 00:21:14.536 user 0m5.354s 00:21:14.536 sys 0m6.342s 00:21:14.536 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:14.536 07:35:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.536 ************************************ 00:21:14.536 END TEST nvmf_wait_for_buf 00:21:14.536 ************************************ 00:21:14.536 07:35:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:14.536 07:35:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:14.536 07:35:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:14.536 07:35:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:14.536 07:35:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:14.536 07:35:32 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:22.678 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:22.678 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:22.678 Found net devices under 0000:31:00.0: cvl_0_0 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:22.678 Found net devices under 0000:31:00.1: cvl_0_1 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:22.678 ************************************ 00:21:22.678 START TEST nvmf_perf_adq 00:21:22.678 ************************************ 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:22.678 * Looking for test storage... 00:21:22.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:22.678 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.679 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:22.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.679 --rc genhtml_branch_coverage=1 00:21:22.679 --rc genhtml_function_coverage=1 00:21:22.679 --rc genhtml_legend=1 00:21:22.679 --rc geninfo_all_blocks=1 00:21:22.679 --rc geninfo_unexecuted_blocks=1 00:21:22.679 00:21:22.679 ' 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:22.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.679 --rc genhtml_branch_coverage=1 00:21:22.679 --rc genhtml_function_coverage=1 00:21:22.679 --rc genhtml_legend=1 00:21:22.679 --rc geninfo_all_blocks=1 00:21:22.679 --rc geninfo_unexecuted_blocks=1 00:21:22.679 00:21:22.679 ' 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:22.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.679 --rc genhtml_branch_coverage=1 00:21:22.679 --rc genhtml_function_coverage=1 00:21:22.679 --rc genhtml_legend=1 00:21:22.679 --rc geninfo_all_blocks=1 00:21:22.679 --rc geninfo_unexecuted_blocks=1 00:21:22.679 00:21:22.679 ' 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:22.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.679 --rc genhtml_branch_coverage=1 00:21:22.679 --rc genhtml_function_coverage=1 00:21:22.679 --rc genhtml_legend=1 00:21:22.679 --rc geninfo_all_blocks=1 00:21:22.679 --rc geninfo_unexecuted_blocks=1 00:21:22.679 00:21:22.679 ' 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.679 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.680 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.680 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.680 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.680 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:22.680 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:22.680 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.282 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:29.282 07:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:29.282 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:29.282 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:29.282 Found net devices under 0000:31:00.0: cvl_0_0 00:21:29.282 07:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:29.282 Found net devices under 0000:31:00.1: cvl_0_1 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:29.282 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.283 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:29.283 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:29.283 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:29.283 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:29.283 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:30.789 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:32.703 07:35:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.992 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:37.993 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:37.993 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:37.993 Found net devices under 0000:31:00.0: cvl_0_0 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:37.993 Found net devices under 0000:31:00.1: cvl_0_1 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.993 07:35:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.993 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.993 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.993 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:37.993 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.993 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.254 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:38.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:21:38.255 00:21:38.255 --- 10.0.0.2 ping statistics --- 00:21:38.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.255 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:38.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:21:38.255 00:21:38.255 --- 10.0.0.1 ping statistics --- 00:21:38.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.255 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3450814 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3450814 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3450814 ']' 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:38.255 07:35:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.255 [2024-11-20 07:35:56.325815] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
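[annotation] Stripped of trace noise, the nvmf_tcp_init sequence above builds a two-namespace topology: the target PF moves into a private netns with 10.0.0.2/24 while the initiator PF keeps 10.0.0.1/24 in the root namespace, and the target binary is then always launched through the netns wrapper. A sketch with placeholder interface names (eth_t, eth_i):

# Target PF goes into its own namespace; initiator PF stays in the root ns.
ip netns add spdk_tgt_ns
ip link set eth_t netns spdk_tgt_ns
ip addr add 10.0.0.1/24 dev eth_i                              # initiator side
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev eth_t    # target side
ip link set eth_i up
ip netns exec spdk_tgt_ns ip link set eth_t up
ip netns exec spdk_tgt_ns ip link set lo up
# Let NVMe/TCP traffic through, then verify reachability both ways.
iptables -I INPUT 1 -i eth_i -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1
# Every target-side command, nvmf_tgt included, then runs as:
#   ip netns exec spdk_tgt_ns <cmd>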
00:21:38.255 [2024-11-20 07:35:56.325881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.255 [2024-11-20 07:35:56.426456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.515 [2024-11-20 07:35:56.481603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.515 [2024-11-20 07:35:56.481658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.515 [2024-11-20 07:35:56.481671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.515 [2024-11-20 07:35:56.481678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.515 [2024-11-20 07:35:56.481684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.515 [2024-11-20 07:35:56.484055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.515 [2024-11-20 07:35:56.484310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.515 [2024-11-20 07:35:56.484469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.515 [2024-11-20 07:35:56.484472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.087 
07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.087 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.348 [2024-11-20 07:35:57.358480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.348 Malloc1 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.348 [2024-11-20 07:35:57.431671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3451170 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:39.348 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:41.261 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:41.261 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.261 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.521 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.521 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:41.521 "tick_rate": 2400000000, 00:21:41.521 "poll_groups": [ 00:21:41.521 { 00:21:41.521 "name": "nvmf_tgt_poll_group_000", 00:21:41.521 "admin_qpairs": 1, 00:21:41.521 "io_qpairs": 1, 00:21:41.521 "current_admin_qpairs": 1, 00:21:41.521 "current_io_qpairs": 1, 00:21:41.521 "pending_bdev_io": 0, 00:21:41.521 "completed_nvme_io": 16801, 00:21:41.521 "transports": [ 00:21:41.521 { 00:21:41.521 "trtype": "TCP" 00:21:41.521 } 00:21:41.521 ] 00:21:41.521 }, 00:21:41.521 { 00:21:41.521 "name": "nvmf_tgt_poll_group_001", 00:21:41.521 "admin_qpairs": 0, 00:21:41.521 "io_qpairs": 1, 00:21:41.521 "current_admin_qpairs": 0, 00:21:41.521 "current_io_qpairs": 1, 00:21:41.521 "pending_bdev_io": 0, 00:21:41.521 "completed_nvme_io": 18819, 00:21:41.521 "transports": [ 00:21:41.521 { 00:21:41.521 "trtype": "TCP" 00:21:41.521 } 00:21:41.521 ] 00:21:41.521 }, 00:21:41.521 { 00:21:41.521 "name": "nvmf_tgt_poll_group_002", 00:21:41.521 "admin_qpairs": 0, 00:21:41.521 "io_qpairs": 1, 00:21:41.521 "current_admin_qpairs": 0, 00:21:41.521 "current_io_qpairs": 1, 00:21:41.521 "pending_bdev_io": 0, 00:21:41.521 "completed_nvme_io": 18699, 00:21:41.521 "transports": [ 00:21:41.521 { 00:21:41.521 "trtype": "TCP" 00:21:41.521 } 00:21:41.521 ] 00:21:41.521 }, 00:21:41.521 { 00:21:41.521 "name": "nvmf_tgt_poll_group_003", 00:21:41.521 "admin_qpairs": 0, 00:21:41.521 "io_qpairs": 1, 00:21:41.521 "current_admin_qpairs": 0, 00:21:41.522 "current_io_qpairs": 1, 00:21:41.522 "pending_bdev_io": 0, 00:21:41.522 "completed_nvme_io": 17058, 00:21:41.522 "transports": [ 00:21:41.522 { 00:21:41.522 "trtype": "TCP" 00:21:41.522 } 00:21:41.522 ] 00:21:41.522 } 00:21:41.522 ] 00:21:41.522 }' 00:21:41.522 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:41.522 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:41.522 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:41.522 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:41.522 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3451170 00:21:49.659 Initializing NVMe Controllers 00:21:49.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:49.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:49.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:49.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:49.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:49.659 Initialization complete. Launching workers. 00:21:49.659 ======================================================== 00:21:49.659 Latency(us) 00:21:49.659 Device Information : IOPS MiB/s Average min max 00:21:49.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12673.50 49.51 5050.18 1210.37 10797.18 00:21:49.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13559.20 52.97 4719.30 1087.99 12715.87 00:21:49.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13374.70 52.24 4785.67 1296.46 13524.65 00:21:49.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12384.90 48.38 5167.62 1203.32 14300.57 00:21:49.659 ======================================================== 00:21:49.659 Total : 51992.29 203.09 4923.82 1087.99 14300.57 00:21:49.659 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:49.659 rmmod nvme_tcp 00:21:49.659 rmmod nvme_fabrics 00:21:49.659 rmmod nvme_keyring 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3450814 ']' 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3450814 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3450814 ']' 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3450814 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3450814 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3450814' 00:21:49.659 killing process with pid 3450814 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3450814 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3450814 00:21:49.659 07:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.659 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.257 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:52.257 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:52.257 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:52.257 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:53.639 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:56.185 07:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:01.493 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:01.493 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:01.493 Found net devices under 0000:31:00.0: cvl_0_0 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:01.493 Found net devices under 0000:31:00.1: cvl_0_1 00:22:01.493 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.494 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.494 07:36:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:22:01.494 00:22:01.494 --- 10.0.0.2 ping statistics --- 00:22:01.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.494 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:01.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:22:01.494 00:22:01.494 --- 10.0.0.1 ping statistics --- 00:22:01.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.494 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:01.494 net.core.busy_poll = 1 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:01.494 net.core.busy_read = 1 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3456225 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3456225 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3456225 ']' 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:01.494 07:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.494 [2024-11-20 07:36:19.630231] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:22:01.494 [2024-11-20 07:36:19.630304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.755 [2024-11-20 07:36:19.732108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:01.755 [2024-11-20 07:36:19.785856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
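
[Annotation, not part of the captured log] The adq_configure_driver step traced above reduces to a short, reproducible sequence; a minimal sketch, assuming the target port cvl_0_0 sits in the cvl_0_0_ns_spdk namespace as in this run (tc/ethtool paths abbreviated from the traced /usr/sbin invocations):

    # hardware TC offload on, driver packet-inspect optimization off
    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

    # busy polling keeps application threads spinning on their socket queues
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1

    # two traffic classes: TC0 on queues 0-1 (default), TC1 on queues 2-3 (NVMe/TCP)
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress

    # steer NVMe/TCP (dst port 4420) into TC1 in hardware (skip_sw)
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
        prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
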
00:22:01.755 [2024-11-20 07:36:19.785910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.755 [2024-11-20 07:36:19.785918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.755 [2024-11-20 07:36:19.785925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.755 [2024-11-20 07:36:19.785932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.755 [2024-11-20 07:36:19.788030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.755 [2024-11-20 07:36:19.788192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.755 [2024-11-20 07:36:19.788353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.755 [2024-11-20 07:36:19.788353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.327 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:02.327 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:02.327 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.327 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:02.327 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.327 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.327 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:02.327 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:02.327 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:02.327 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.327 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.327 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.588 07:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.588 [2024-11-20 07:36:20.646099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.588 Malloc1 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.588 [2024-11-20 07:36:20.720560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3456559 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:02.588 07:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:05.130 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:05.130 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.130 07:36:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.130 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.130 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:05.130 "tick_rate": 2400000000, 00:22:05.130 "poll_groups": [ 00:22:05.130 { 00:22:05.130 "name": "nvmf_tgt_poll_group_000", 00:22:05.130 "admin_qpairs": 1, 00:22:05.130 "io_qpairs": 2, 00:22:05.130 "current_admin_qpairs": 1, 00:22:05.130 "current_io_qpairs": 2, 00:22:05.130 "pending_bdev_io": 0, 00:22:05.130 "completed_nvme_io": 25743, 00:22:05.130 "transports": [ 00:22:05.130 { 00:22:05.130 "trtype": "TCP" 00:22:05.130 } 00:22:05.130 ] 00:22:05.130 }, 00:22:05.130 { 00:22:05.130 "name": "nvmf_tgt_poll_group_001", 00:22:05.130 "admin_qpairs": 0, 00:22:05.130 "io_qpairs": 2, 00:22:05.130 "current_admin_qpairs": 0, 00:22:05.130 "current_io_qpairs": 2, 00:22:05.130 "pending_bdev_io": 0, 00:22:05.130 "completed_nvme_io": 27818, 00:22:05.130 "transports": [ 00:22:05.130 { 00:22:05.130 "trtype": "TCP" 00:22:05.130 } 00:22:05.130 ] 00:22:05.130 }, 00:22:05.130 { 00:22:05.130 "name": "nvmf_tgt_poll_group_002", 00:22:05.130 "admin_qpairs": 0, 00:22:05.130 "io_qpairs": 0, 00:22:05.130 "current_admin_qpairs": 0, 00:22:05.130 "current_io_qpairs": 0, 00:22:05.130 "pending_bdev_io": 0, 00:22:05.130 "completed_nvme_io": 0, 00:22:05.130 "transports": [ 00:22:05.130 { 00:22:05.130 "trtype": "TCP" 00:22:05.130 } 00:22:05.130 ] 00:22:05.130 }, 00:22:05.130 { 00:22:05.130 "name": "nvmf_tgt_poll_group_003", 00:22:05.130 "admin_qpairs": 0, 00:22:05.130 "io_qpairs": 0, 00:22:05.130 "current_admin_qpairs": 0, 00:22:05.130 "current_io_qpairs": 0, 00:22:05.130 "pending_bdev_io": 0, 00:22:05.130 "completed_nvme_io": 0, 00:22:05.130 "transports": [ 00:22:05.130 { 00:22:05.130 "trtype": "TCP" 00:22:05.130 } 00:22:05.130 ] 00:22:05.130 } 00:22:05.130 ] 00:22:05.130 }' 00:22:05.130 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:05.130 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:05.130 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:05.130 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:05.130 07:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3456559 00:22:13.261 Initializing NVMe Controllers 00:22:13.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:13.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:13.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:13.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:13.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:13.261 Initialization complete. Launching workers. 
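
[Annotation, not part of the captured log] The nvmf_get_stats check above verifies ADQ socket placement: with four reactors (-m 0xF) and the perf load pinned onto two poll groups, the other two poll groups must stay idle, and the test fails if fewer than two report current_io_qpairs == 0. The same check as a standalone sketch, assuming scripts/rpc.py can reach the target's RPC socket:

    # count poll groups that never received an I/O qpair; jq prints one line
    # per idle group, so wc -l yields the count
    count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    if [[ $count -lt 2 ]]; then
        echo "ADQ placement failed: only $count idle poll groups" >&2
        exit 1
    fi
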
00:22:13.261 ======================================================== 00:22:13.262 Latency(us) 00:22:13.262 Device Information : IOPS MiB/s Average min max 00:22:13.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9352.72 36.53 6843.86 1225.41 52532.82 00:22:13.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9275.32 36.23 6899.76 988.10 52231.58 00:22:13.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9178.22 35.85 6973.42 1332.64 53921.07 00:22:13.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9297.92 36.32 6896.24 1175.38 52348.30 00:22:13.262 ======================================================== 00:22:13.262 Total : 37104.19 144.94 6903.01 988.10 53921.07 00:22:13.262 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:13.262 rmmod nvme_tcp 00:22:13.262 rmmod nvme_fabrics 00:22:13.262 rmmod nvme_keyring 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3456225 ']' 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3456225 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3456225 ']' 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3456225 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:13.262 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3456225 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3456225' 00:22:13.262 killing process with pid 3456225 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3456225 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3456225 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.262 07:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.262 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:16.564 00:22:16.564 real 0m54.741s 00:22:16.564 user 2m49.925s 00:22:16.564 sys 0m11.602s 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.564 ************************************ 00:22:16.564 END TEST nvmf_perf_adq 00:22:16.564 ************************************ 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:16.564 ************************************ 00:22:16.564 START TEST nvmf_shutdown 00:22:16.564 ************************************ 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:16.564 * Looking for test storage... 
00:22:16.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:16.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.564 --rc genhtml_branch_coverage=1 00:22:16.564 --rc genhtml_function_coverage=1 00:22:16.564 --rc genhtml_legend=1 00:22:16.564 --rc geninfo_all_blocks=1 00:22:16.564 --rc geninfo_unexecuted_blocks=1 00:22:16.564 00:22:16.564 ' 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:16.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.564 --rc genhtml_branch_coverage=1 00:22:16.564 --rc genhtml_function_coverage=1 00:22:16.564 --rc genhtml_legend=1 00:22:16.564 --rc geninfo_all_blocks=1 00:22:16.564 --rc geninfo_unexecuted_blocks=1 00:22:16.564 00:22:16.564 ' 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:16.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.564 --rc genhtml_branch_coverage=1 00:22:16.564 --rc genhtml_function_coverage=1 00:22:16.564 --rc genhtml_legend=1 00:22:16.564 --rc geninfo_all_blocks=1 00:22:16.564 --rc geninfo_unexecuted_blocks=1 00:22:16.564 00:22:16.564 ' 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:16.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.564 --rc genhtml_branch_coverage=1 00:22:16.564 --rc genhtml_function_coverage=1 00:22:16.564 --rc genhtml_legend=1 00:22:16.564 --rc geninfo_all_blocks=1 00:22:16.564 --rc geninfo_unexecuted_blocks=1 00:22:16.564 00:22:16.564 ' 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
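
[Annotation, not part of the captured log] The scripts/common.sh trace above ("lt 1.15 2") is the lcov version gate: cmp_versions splits both version strings on ".-:" and compares them field by field. The numeric core of that comparison, as a simplified sketch that drops the traced non-numeric fallbacks:

    # return 0 (true) iff version $1 is strictly older than version $2
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first lower field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov predates 2.x"   # matches the traced outcome
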
00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.564 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:16.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:16.565 07:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:16.565 ************************************ 00:22:16.565 START TEST nvmf_shutdown_tc1 00:22:16.565 ************************************ 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:16.565 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:24.702 07:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:24.702 07:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:24.702 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:24.702 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:24.702 Found net devices under 0000:31:00.0: cvl_0_0 00:22:24.702 07:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:24.702 Found net devices under 0000:31:00.1: cvl_0_1 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:24.702 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:24.703 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:24.703 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.703 07:36:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:24.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:24.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:22:24.703 00:22:24.703 --- 10.0.0.2 ping statistics --- 00:22:24.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.703 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:24.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:22:24.703 00:22:24.703 --- 10.0.0.1 ping statistics --- 00:22:24.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.703 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3463058 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3463058 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3463058 ']' 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
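
[Annotation, not part of the captured log] nvmf_tcp_init, traced here for the second time, splits the two E810 ports across a network namespace so one host can act as both NVMe/TCP endpoints: cvl_0_0 becomes the target side at 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the ipts helper tags its firewall rule with an SPDK_NVMF comment so nvmftestfini can strip it later (the iptables-save | grep -v SPDK_NVMF | iptables-restore pass seen at the end of the perf_adq run above). Condensed from the traced commands:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, tagged for wholesale cleanup later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
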
00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:24.703 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:24.703 [2024-11-20 07:36:42.398270] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:22:24.703 [2024-11-20 07:36:42.398354] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.703 [2024-11-20 07:36:42.502886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.703 [2024-11-20 07:36:42.555275] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.703 [2024-11-20 07:36:42.555332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.703 [2024-11-20 07:36:42.555341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.703 [2024-11-20 07:36:42.555348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.703 [2024-11-20 07:36:42.555354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.703 [2024-11-20 07:36:42.557413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.703 [2024-11-20 07:36:42.557573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.703 [2024-11-20 07:36:42.557735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:24.703 [2024-11-20 07:36:42.557736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.275 [2024-11-20 07:36:43.271929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:25.275 07:36:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.275 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.275 Malloc1 
00:22:25.275 [2024-11-20 07:36:43.403965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.275 Malloc2 00:22:25.275 Malloc3 00:22:25.571 Malloc4 00:22:25.571 Malloc5 00:22:25.571 Malloc6 00:22:25.571 Malloc7 00:22:25.571 Malloc8 00:22:25.571 Malloc9 00:22:25.832 Malloc10 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3463438 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3463438 /var/tmp/bdevperf.sock 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3463438 ']' 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
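For each of the ten subsystems, the create_subsystems loop above cats a fixed RPC stanza into rpcs.txt and then replays the whole file through rpc_cmd, producing the Malloc1 through Malloc10 bdevs and the TCP listener on 10.0.0.2:4420. Per index i this is roughly equivalent to the rpc.py sequence below; the Malloc size/block-size arguments and the serial-number pattern are illustrative assumptions, while the NQN pattern, address, and port are taken from this log:

    for i in $(seq 1 10); do
      scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done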
00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.832 { 00:22:25.832 "params": { 00:22:25.832 "name": "Nvme$subsystem", 00:22:25.832 "trtype": "$TEST_TRANSPORT", 00:22:25.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.832 "adrfam": "ipv4", 00:22:25.832 "trsvcid": "$NVMF_PORT", 00:22:25.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.832 "hdgst": ${hdgst:-false}, 00:22:25.832 "ddgst": ${ddgst:-false} 00:22:25.832 }, 00:22:25.832 "method": "bdev_nvme_attach_controller" 00:22:25.832 } 00:22:25.832 EOF 00:22:25.832 )") 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.832 { 00:22:25.832 "params": { 00:22:25.832 "name": "Nvme$subsystem", 00:22:25.832 "trtype": "$TEST_TRANSPORT", 00:22:25.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.832 "adrfam": "ipv4", 00:22:25.832 "trsvcid": "$NVMF_PORT", 00:22:25.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.832 "hdgst": ${hdgst:-false}, 00:22:25.832 "ddgst": ${ddgst:-false} 00:22:25.832 }, 00:22:25.832 "method": "bdev_nvme_attach_controller" 00:22:25.832 } 00:22:25.832 EOF 00:22:25.832 )") 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.832 { 00:22:25.832 "params": { 00:22:25.832 "name": "Nvme$subsystem", 00:22:25.832 "trtype": "$TEST_TRANSPORT", 00:22:25.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.832 "adrfam": "ipv4", 00:22:25.832 "trsvcid": "$NVMF_PORT", 00:22:25.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.832 "hdgst": ${hdgst:-false}, 00:22:25.832 "ddgst": ${ddgst:-false} 00:22:25.832 }, 00:22:25.832 "method": "bdev_nvme_attach_controller" 
00:22:25.832 } 00:22:25.832 EOF 00:22:25.832 )") 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.832 { 00:22:25.832 "params": { 00:22:25.832 "name": "Nvme$subsystem", 00:22:25.832 "trtype": "$TEST_TRANSPORT", 00:22:25.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.832 "adrfam": "ipv4", 00:22:25.832 "trsvcid": "$NVMF_PORT", 00:22:25.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.832 "hdgst": ${hdgst:-false}, 00:22:25.832 "ddgst": ${ddgst:-false} 00:22:25.832 }, 00:22:25.832 "method": "bdev_nvme_attach_controller" 00:22:25.832 } 00:22:25.832 EOF 00:22:25.832 )") 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.832 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.833 { 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme$subsystem", 00:22:25.833 "trtype": "$TEST_TRANSPORT", 00:22:25.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "$NVMF_PORT", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.833 "hdgst": ${hdgst:-false}, 00:22:25.833 "ddgst": ${ddgst:-false} 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 } 00:22:25.833 EOF 00:22:25.833 )") 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.833 { 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme$subsystem", 00:22:25.833 "trtype": "$TEST_TRANSPORT", 00:22:25.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "$NVMF_PORT", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.833 "hdgst": ${hdgst:-false}, 00:22:25.833 "ddgst": ${ddgst:-false} 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 } 00:22:25.833 EOF 00:22:25.833 )") 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.833 [2024-11-20 07:36:43.925023] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:22:25.833 [2024-11-20 07:36:43.925095] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.833 { 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme$subsystem", 00:22:25.833 "trtype": "$TEST_TRANSPORT", 00:22:25.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "$NVMF_PORT", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.833 "hdgst": ${hdgst:-false}, 00:22:25.833 "ddgst": ${ddgst:-false} 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 } 00:22:25.833 EOF 00:22:25.833 )") 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.833 { 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme$subsystem", 00:22:25.833 "trtype": "$TEST_TRANSPORT", 00:22:25.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "$NVMF_PORT", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.833 "hdgst": ${hdgst:-false}, 00:22:25.833 "ddgst": ${ddgst:-false} 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 } 00:22:25.833 EOF 00:22:25.833 )") 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.833 { 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme$subsystem", 00:22:25.833 "trtype": "$TEST_TRANSPORT", 00:22:25.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "$NVMF_PORT", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.833 "hdgst": ${hdgst:-false}, 00:22:25.833 "ddgst": ${ddgst:-false} 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 } 00:22:25.833 EOF 00:22:25.833 )") 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.833 { 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme$subsystem", 00:22:25.833 "trtype": "$TEST_TRANSPORT", 00:22:25.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "$NVMF_PORT", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.833 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:25.833 "hdgst": ${hdgst:-false}, 00:22:25.833 "ddgst": ${ddgst:-false} 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 } 00:22:25.833 EOF 00:22:25.833 )") 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:25.833 07:36:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme1", 00:22:25.833 "trtype": "tcp", 00:22:25.833 "traddr": "10.0.0.2", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "4420", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.833 "hdgst": false, 00:22:25.833 "ddgst": false 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 },{ 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme2", 00:22:25.833 "trtype": "tcp", 00:22:25.833 "traddr": "10.0.0.2", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "4420", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:25.833 "hdgst": false, 00:22:25.833 "ddgst": false 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 },{ 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme3", 00:22:25.833 "trtype": "tcp", 00:22:25.833 "traddr": "10.0.0.2", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "4420", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:25.833 "hdgst": false, 00:22:25.833 "ddgst": false 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 },{ 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme4", 00:22:25.833 "trtype": "tcp", 00:22:25.833 "traddr": "10.0.0.2", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "4420", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:25.833 "hdgst": false, 00:22:25.833 "ddgst": false 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 },{ 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme5", 00:22:25.833 "trtype": "tcp", 00:22:25.833 "traddr": "10.0.0.2", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "4420", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:25.833 "hdgst": false, 00:22:25.833 "ddgst": false 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 },{ 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme6", 00:22:25.833 "trtype": "tcp", 00:22:25.833 "traddr": "10.0.0.2", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "4420", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:25.833 "hdgst": false, 00:22:25.833 "ddgst": false 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 },{ 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme7", 00:22:25.833 "trtype": "tcp", 00:22:25.833 "traddr": "10.0.0.2", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "4420", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:25.833 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:25.833 "hdgst": false, 00:22:25.833 "ddgst": false 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 },{ 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme8", 00:22:25.833 "trtype": "tcp", 00:22:25.833 "traddr": "10.0.0.2", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "4420", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:25.833 "hdgst": false, 00:22:25.833 "ddgst": false 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 },{ 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme9", 00:22:25.833 "trtype": "tcp", 00:22:25.833 "traddr": "10.0.0.2", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "4420", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:25.833 "hdgst": false, 00:22:25.833 "ddgst": false 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 },{ 00:22:25.833 "params": { 00:22:25.833 "name": "Nvme10", 00:22:25.833 "trtype": "tcp", 00:22:25.833 "traddr": "10.0.0.2", 00:22:25.833 "adrfam": "ipv4", 00:22:25.833 "trsvcid": "4420", 00:22:25.833 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:25.833 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:25.833 "hdgst": false, 00:22:25.833 "ddgst": false 00:22:25.833 }, 00:22:25.833 "method": "bdev_nvme_attach_controller" 00:22:25.833 }' 00:22:25.833 [2024-11-20 07:36:44.022743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.093 [2024-11-20 07:36:44.079677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.475 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:27.475 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:27.475 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:27.475 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.475 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:27.475 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.475 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3463438 00:22:27.475 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:27.476 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:28.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3463438 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:28.416 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3463058 00:22:28.416 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:28.416 07:36:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:28.416 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:28.416 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:28.416 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.417 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.417 { 00:22:28.417 "params": { 00:22:28.417 "name": "Nvme$subsystem", 00:22:28.417 "trtype": "$TEST_TRANSPORT", 00:22:28.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.417 "adrfam": "ipv4", 00:22:28.417 "trsvcid": "$NVMF_PORT", 00:22:28.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.417 "hdgst": ${hdgst:-false}, 00:22:28.417 "ddgst": ${ddgst:-false} 00:22:28.417 }, 00:22:28.417 "method": "bdev_nvme_attach_controller" 00:22:28.417 } 00:22:28.417 EOF 00:22:28.417 )") 00:22:28.417 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.417 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.417 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.417 { 00:22:28.417 "params": { 00:22:28.417 "name": "Nvme$subsystem", 00:22:28.417 "trtype": "$TEST_TRANSPORT", 00:22:28.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.417 "adrfam": "ipv4", 00:22:28.417 "trsvcid": "$NVMF_PORT", 00:22:28.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.417 "hdgst": ${hdgst:-false}, 00:22:28.417 "ddgst": ${ddgst:-false} 00:22:28.417 }, 00:22:28.417 "method": "bdev_nvme_attach_controller" 00:22:28.417 } 00:22:28.417 EOF 00:22:28.417 )") 00:22:28.417 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.417 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.417 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.417 { 00:22:28.417 "params": { 00:22:28.417 "name": "Nvme$subsystem", 00:22:28.417 "trtype": "$TEST_TRANSPORT", 00:22:28.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.417 "adrfam": "ipv4", 00:22:28.417 "trsvcid": "$NVMF_PORT", 00:22:28.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.417 "hdgst": ${hdgst:-false}, 00:22:28.417 "ddgst": ${ddgst:-false} 00:22:28.417 }, 00:22:28.417 "method": "bdev_nvme_attach_controller" 00:22:28.417 } 00:22:28.417 EOF 00:22:28.417 )") 00:22:28.417 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.417 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.417 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.417 { 00:22:28.417 "params": { 00:22:28.417 "name": "Nvme$subsystem", 00:22:28.417 "trtype": 
"$TEST_TRANSPORT", 00:22:28.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.417 "adrfam": "ipv4", 00:22:28.417 "trsvcid": "$NVMF_PORT", 00:22:28.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.417 "hdgst": ${hdgst:-false}, 00:22:28.417 "ddgst": ${ddgst:-false} 00:22:28.417 }, 00:22:28.417 "method": "bdev_nvme_attach_controller" 00:22:28.417 } 00:22:28.417 EOF 00:22:28.417 )") 00:22:28.417 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.417 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.417 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.417 { 00:22:28.417 "params": { 00:22:28.417 "name": "Nvme$subsystem", 00:22:28.417 "trtype": "$TEST_TRANSPORT", 00:22:28.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.417 "adrfam": "ipv4", 00:22:28.417 "trsvcid": "$NVMF_PORT", 00:22:28.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.417 "hdgst": ${hdgst:-false}, 00:22:28.417 "ddgst": ${ddgst:-false} 00:22:28.417 }, 00:22:28.417 "method": "bdev_nvme_attach_controller" 00:22:28.417 } 00:22:28.417 EOF 00:22:28.417 )") 00:22:28.678 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.678 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.678 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.678 { 00:22:28.678 "params": { 00:22:28.678 "name": "Nvme$subsystem", 00:22:28.678 "trtype": "$TEST_TRANSPORT", 00:22:28.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.678 "adrfam": "ipv4", 00:22:28.678 "trsvcid": "$NVMF_PORT", 00:22:28.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.678 "hdgst": ${hdgst:-false}, 00:22:28.678 "ddgst": ${ddgst:-false} 00:22:28.678 }, 00:22:28.678 "method": "bdev_nvme_attach_controller" 00:22:28.678 } 00:22:28.678 EOF 00:22:28.678 )") 00:22:28.678 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.678 [2024-11-20 07:36:46.635951] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:22:28.678 [2024-11-20 07:36:46.636010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3463940 ] 00:22:28.678 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.678 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.678 { 00:22:28.678 "params": { 00:22:28.678 "name": "Nvme$subsystem", 00:22:28.678 "trtype": "$TEST_TRANSPORT", 00:22:28.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.678 "adrfam": "ipv4", 00:22:28.678 "trsvcid": "$NVMF_PORT", 00:22:28.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.678 "hdgst": ${hdgst:-false}, 00:22:28.678 "ddgst": ${ddgst:-false} 00:22:28.678 }, 00:22:28.678 "method": "bdev_nvme_attach_controller" 00:22:28.678 } 00:22:28.678 EOF 00:22:28.678 )") 00:22:28.678 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.678 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.678 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.678 { 00:22:28.678 "params": { 00:22:28.678 "name": "Nvme$subsystem", 00:22:28.678 "trtype": "$TEST_TRANSPORT", 00:22:28.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.679 "adrfam": "ipv4", 00:22:28.679 "trsvcid": "$NVMF_PORT", 00:22:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.679 "hdgst": ${hdgst:-false}, 00:22:28.679 "ddgst": ${ddgst:-false} 00:22:28.679 }, 00:22:28.679 "method": "bdev_nvme_attach_controller" 00:22:28.679 } 00:22:28.679 EOF 00:22:28.679 )") 00:22:28.679 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.679 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.679 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.679 { 00:22:28.679 "params": { 00:22:28.679 "name": "Nvme$subsystem", 00:22:28.679 "trtype": "$TEST_TRANSPORT", 00:22:28.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.679 "adrfam": "ipv4", 00:22:28.679 "trsvcid": "$NVMF_PORT", 00:22:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.679 "hdgst": ${hdgst:-false}, 00:22:28.679 "ddgst": ${ddgst:-false} 00:22:28.679 }, 00:22:28.679 "method": "bdev_nvme_attach_controller" 00:22:28.679 } 00:22:28.679 EOF 00:22:28.679 )") 00:22:28.679 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.679 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:28.679 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:28.679 { 00:22:28.679 "params": { 00:22:28.679 "name": "Nvme$subsystem", 00:22:28.679 "trtype": "$TEST_TRANSPORT", 00:22:28.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.679 
"adrfam": "ipv4", 00:22:28.679 "trsvcid": "$NVMF_PORT", 00:22:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.679 "hdgst": ${hdgst:-false}, 00:22:28.679 "ddgst": ${ddgst:-false} 00:22:28.679 }, 00:22:28.679 "method": "bdev_nvme_attach_controller" 00:22:28.679 } 00:22:28.679 EOF 00:22:28.679 )") 00:22:28.679 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:28.679 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:28.679 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:28.679 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:28.679 "params": { 00:22:28.679 "name": "Nvme1", 00:22:28.679 "trtype": "tcp", 00:22:28.679 "traddr": "10.0.0.2", 00:22:28.679 "adrfam": "ipv4", 00:22:28.679 "trsvcid": "4420", 00:22:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.679 "hdgst": false, 00:22:28.679 "ddgst": false 00:22:28.679 }, 00:22:28.679 "method": "bdev_nvme_attach_controller" 00:22:28.679 },{ 00:22:28.679 "params": { 00:22:28.679 "name": "Nvme2", 00:22:28.679 "trtype": "tcp", 00:22:28.679 "traddr": "10.0.0.2", 00:22:28.679 "adrfam": "ipv4", 00:22:28.679 "trsvcid": "4420", 00:22:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:28.679 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:28.679 "hdgst": false, 00:22:28.679 "ddgst": false 00:22:28.679 }, 00:22:28.679 "method": "bdev_nvme_attach_controller" 00:22:28.679 },{ 00:22:28.679 "params": { 00:22:28.679 "name": "Nvme3", 00:22:28.679 "trtype": "tcp", 00:22:28.679 "traddr": "10.0.0.2", 00:22:28.679 "adrfam": "ipv4", 00:22:28.679 "trsvcid": "4420", 00:22:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:28.679 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:28.679 "hdgst": false, 00:22:28.679 "ddgst": false 00:22:28.679 }, 00:22:28.679 "method": "bdev_nvme_attach_controller" 00:22:28.679 },{ 00:22:28.679 "params": { 00:22:28.679 "name": "Nvme4", 00:22:28.679 "trtype": "tcp", 00:22:28.679 "traddr": "10.0.0.2", 00:22:28.679 "adrfam": "ipv4", 00:22:28.679 "trsvcid": "4420", 00:22:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:28.679 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:28.679 "hdgst": false, 00:22:28.679 "ddgst": false 00:22:28.679 }, 00:22:28.679 "method": "bdev_nvme_attach_controller" 00:22:28.679 },{ 00:22:28.679 "params": { 00:22:28.679 "name": "Nvme5", 00:22:28.679 "trtype": "tcp", 00:22:28.679 "traddr": "10.0.0.2", 00:22:28.679 "adrfam": "ipv4", 00:22:28.679 "trsvcid": "4420", 00:22:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:28.679 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:28.679 "hdgst": false, 00:22:28.679 "ddgst": false 00:22:28.679 }, 00:22:28.679 "method": "bdev_nvme_attach_controller" 00:22:28.679 },{ 00:22:28.679 "params": { 00:22:28.679 "name": "Nvme6", 00:22:28.679 "trtype": "tcp", 00:22:28.679 "traddr": "10.0.0.2", 00:22:28.679 "adrfam": "ipv4", 00:22:28.679 "trsvcid": "4420", 00:22:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:28.679 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:28.679 "hdgst": false, 00:22:28.679 "ddgst": false 00:22:28.679 }, 00:22:28.679 "method": "bdev_nvme_attach_controller" 00:22:28.679 },{ 00:22:28.679 "params": { 00:22:28.679 "name": "Nvme7", 00:22:28.679 "trtype": "tcp", 00:22:28.679 "traddr": "10.0.0.2", 
00:22:28.679 "adrfam": "ipv4", 00:22:28.679 "trsvcid": "4420", 00:22:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:28.679 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:28.679 "hdgst": false, 00:22:28.679 "ddgst": false 00:22:28.679 }, 00:22:28.679 "method": "bdev_nvme_attach_controller" 00:22:28.679 },{ 00:22:28.679 "params": { 00:22:28.679 "name": "Nvme8", 00:22:28.679 "trtype": "tcp", 00:22:28.679 "traddr": "10.0.0.2", 00:22:28.679 "adrfam": "ipv4", 00:22:28.679 "trsvcid": "4420", 00:22:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:28.679 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:28.679 "hdgst": false, 00:22:28.679 "ddgst": false 00:22:28.679 }, 00:22:28.679 "method": "bdev_nvme_attach_controller" 00:22:28.679 },{ 00:22:28.679 "params": { 00:22:28.679 "name": "Nvme9", 00:22:28.679 "trtype": "tcp", 00:22:28.679 "traddr": "10.0.0.2", 00:22:28.679 "adrfam": "ipv4", 00:22:28.679 "trsvcid": "4420", 00:22:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:28.679 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:28.679 "hdgst": false, 00:22:28.679 "ddgst": false 00:22:28.679 }, 00:22:28.680 "method": "bdev_nvme_attach_controller" 00:22:28.680 },{ 00:22:28.680 "params": { 00:22:28.680 "name": "Nvme10", 00:22:28.680 "trtype": "tcp", 00:22:28.680 "traddr": "10.0.0.2", 00:22:28.680 "adrfam": "ipv4", 00:22:28.680 "trsvcid": "4420", 00:22:28.680 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:28.680 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:28.680 "hdgst": false, 00:22:28.680 "ddgst": false 00:22:28.680 }, 00:22:28.680 "method": "bdev_nvme_attach_controller" 00:22:28.680 }' 00:22:28.680 [2024-11-20 07:36:46.727071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.680 [2024-11-20 07:36:46.763329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.064 Running I/O for 1 seconds... 
00:22:31.266 1804.00 IOPS, 112.75 MiB/s
00:22:31.266 Latency(us)
00:22:31.266 [2024-11-20T06:36:49.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:31.266 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.266 Verification LBA range: start 0x0 length 0x400
00:22:31.266 Nvme1n1 : 1.13 226.40 14.15 0.00 0.00 279862.83 22063.79 256901.12
00:22:31.266 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.266 Verification LBA range: start 0x0 length 0x400
00:22:31.266 Nvme2n1 : 1.14 223.65 13.98 0.00 0.00 277668.05 15947.09 260396.37
00:22:31.266 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.266 Verification LBA range: start 0x0 length 0x400
00:22:31.266 Nvme3n1 : 1.14 224.72 14.05 0.00 0.00 272380.37 15510.19 253405.87
00:22:31.266 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.266 Verification LBA range: start 0x0 length 0x400
00:22:31.266 Nvme4n1 : 1.07 239.50 14.97 0.00 0.00 250269.23 18022.40 258648.75
00:22:31.266 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.266 Verification LBA range: start 0x0 length 0x400
00:22:31.266 Nvme5n1 : 1.14 225.37 14.09 0.00 0.00 262249.17 19114.67 234181.97
00:22:31.266 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.266 Verification LBA range: start 0x0 length 0x400
00:22:31.266 Nvme6n1 : 1.12 227.68 14.23 0.00 0.00 254472.96 16056.32 246415.36
00:22:31.266 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.266 Verification LBA range: start 0x0 length 0x400
00:22:31.266 Nvme7n1 : 1.20 267.26 16.70 0.00 0.00 213420.20 7154.35 251658.24
00:22:31.266 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.266 Verification LBA range: start 0x0 length 0x400
00:22:31.266 Nvme8n1 : 1.19 271.46 16.97 0.00 0.00 205236.40 20316.16 248162.99
00:22:31.266 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.266 Verification LBA range: start 0x0 length 0x400
00:22:31.266 Nvme9n1 : 1.21 263.92 16.50 0.00 0.00 209926.74 7700.48 267386.88
00:22:31.266 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.266 Verification LBA range: start 0x0 length 0x400
00:22:31.266 Nvme10n1 : 1.22 263.36 16.46 0.00 0.00 206549.38 6799.36 276125.01
00:22:31.266 [2024-11-20T06:36:49.476Z] ===================================================================================================================
00:22:31.266 [2024-11-20T06:36:49.476Z] Total : 2433.32 152.08 0.00 0.00 240024.99 6799.36 276125.01
00:22:31.266 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:22:31.266 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:31.266 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:31.266 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:31.266 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:31.266 07:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:31.266 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:31.266 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.266 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:31.266 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.266 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.266 rmmod nvme_tcp 00:22:31.266 rmmod nvme_fabrics 00:22:31.266 rmmod nvme_keyring 00:22:31.266 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.526 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:31.526 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:31.526 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3463058 ']' 00:22:31.526 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3463058 00:22:31.526 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3463058 ']' 00:22:31.526 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 3463058 00:22:31.526 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:22:31.527 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:31.527 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3463058 00:22:31.527 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:31.527 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:31.527 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3463058' 00:22:31.527 killing process with pid 3463058 00:22:31.527 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3463058 00:22:31.527 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3463058 00:22:31.788 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.788 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.788 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.788 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:31.788 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:31.789 07:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.789 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.789 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.789 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.789 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.789 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.789 07:36:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.702 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:33.702 00:22:33.702 real 0m17.222s 00:22:33.702 user 0m35.024s 00:22:33.702 sys 0m7.096s 00:22:33.702 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:33.702 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.702 ************************************ 00:22:33.702 END TEST nvmf_shutdown_tc1 00:22:33.702 ************************************ 00:22:33.702 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:33.702 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:33.702 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:33.702 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:33.963 ************************************ 00:22:33.963 START TEST nvmf_shutdown_tc2 00:22:33.963 ************************************ 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.964 
07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:33.964 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:33.964 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:33.964 Found net devices under 0000:31:00.0: cvl_0_0 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.964 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:33.965 Found net devices under 0000:31:00.1: cvl_0_1 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:33.965 07:36:51 
00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.965 07:36:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.965 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.965 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.965 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.965 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:34.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes
of data. 00:22:34.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:22:34.226 00:22:34.226 --- 10.0.0.2 ping statistics --- 00:22:34.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.226 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:34.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:22:34.226 00:22:34.226 --- 10.0.0.1 ping statistics --- 00:22:34.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.226 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3465215 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3465215 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3465215 ']' 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:34.226 07:36:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:34.226 [2024-11-20 07:36:52.409598] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:22:34.226 [2024-11-20 07:36:52.409665] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.486 [2024-11-20 07:36:52.506235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.486 [2024-11-20 07:36:52.540201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.486 [2024-11-20 07:36:52.540232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.486 [2024-11-20 07:36:52.540241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.486 [2024-11-20 07:36:52.540246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.486 [2024-11-20 07:36:52.540250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.486 [2024-11-20 07:36:52.541593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.486 [2024-11-20 07:36:52.541763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.486 [2024-11-20 07:36:52.541898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:34.486 [2024-11-20 07:36:52.541990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.057 [2024-11-20 07:36:53.233423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.057 
07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.057 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.318 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.318 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.318 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.318 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.318 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.318 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.319 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.319 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.319 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.319 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.319 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.319 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.319 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.319 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.319 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:35.319 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
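[Editor's note] The create_subsystems loop traced above (shutdown.sh@28-29) simply cats one block of RPC commands per subsystem into rpcs.txt; the rpc_cmd call that follows replays the whole file in a single batch. A rough sketch of that generate-then-replay pattern -- the RPC names below are real SPDK RPCs, but the exact arguments are illustrative, not the ones shutdown.sh emits:

  rm -f rpcs.txt
  for i in {1..10}; do
      {
          echo "bdev_malloc_create -b Malloc$i 64 512"
          echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a"
          echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
          echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> rpcs.txt
  done
  # replay the whole batch over one RPC connection, e.g.:
  # scripts/rpc.py < rpcs.txt

Batching matters here: issuing 40 RPCs through one rpc.py invocation avoids paying interpreter startup cost per call, which is why the Malloc1..Malloc10 bdevs below all appear within a fraction of a second.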
00:22:35.319 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.319 Malloc1 00:22:35.319 [2024-11-20 07:36:53.347842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.319 Malloc2 00:22:35.319 Malloc3 00:22:35.319 Malloc4 00:22:35.319 Malloc5 00:22:35.319 Malloc6 00:22:35.579 Malloc7 00:22:35.579 Malloc8 00:22:35.579 Malloc9 00:22:35.579 Malloc10 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3465443 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3465443 /var/tmp/bdevperf.sock 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3465443 ']' 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
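[Editor's note] perfpid is captured and waitforlisten then blocks until the bdevperf process has created its RPC socket, hence the "Waiting for process to start up and listen..." message above. A minimal sketch of that wait loop using only bash builtins and a socket test; the real common/autotest_common.sh helper is more thorough (longer timeout, and it verifies the socket actually answers RPCs):

  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1   # app died before listening
          [[ -S $sock ]] && return 0                # UNIX domain socket showed up
          sleep 0.1
      done
      return 1                                      # timed out
  }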
00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.580 { 00:22:35.580 "params": { 00:22:35.580 "name": "Nvme$subsystem", 00:22:35.580 "trtype": "$TEST_TRANSPORT", 00:22:35.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.580 "adrfam": "ipv4", 00:22:35.580 "trsvcid": "$NVMF_PORT", 00:22:35.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.580 "hdgst": ${hdgst:-false}, 00:22:35.580 "ddgst": ${ddgst:-false} 00:22:35.580 }, 00:22:35.580 "method": "bdev_nvme_attach_controller" 00:22:35.580 } 00:22:35.580 EOF 00:22:35.580 )") 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.580 { 00:22:35.580 "params": { 00:22:35.580 "name": "Nvme$subsystem", 00:22:35.580 "trtype": "$TEST_TRANSPORT", 00:22:35.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.580 "adrfam": "ipv4", 00:22:35.580 "trsvcid": "$NVMF_PORT", 00:22:35.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.580 "hdgst": ${hdgst:-false}, 00:22:35.580 "ddgst": ${ddgst:-false} 00:22:35.580 }, 00:22:35.580 "method": "bdev_nvme_attach_controller" 00:22:35.580 } 00:22:35.580 EOF 00:22:35.580 )") 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.580 { 00:22:35.580 "params": { 00:22:35.580 "name": "Nvme$subsystem", 00:22:35.580 "trtype": "$TEST_TRANSPORT", 00:22:35.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.580 "adrfam": "ipv4", 00:22:35.580 "trsvcid": "$NVMF_PORT", 00:22:35.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.580 "hdgst": ${hdgst:-false}, 00:22:35.580 "ddgst": ${ddgst:-false} 00:22:35.580 }, 00:22:35.580 "method": 
"bdev_nvme_attach_controller" 00:22:35.580 } 00:22:35.580 EOF 00:22:35.580 )") 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.580 { 00:22:35.580 "params": { 00:22:35.580 "name": "Nvme$subsystem", 00:22:35.580 "trtype": "$TEST_TRANSPORT", 00:22:35.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.580 "adrfam": "ipv4", 00:22:35.580 "trsvcid": "$NVMF_PORT", 00:22:35.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.580 "hdgst": ${hdgst:-false}, 00:22:35.580 "ddgst": ${ddgst:-false} 00:22:35.580 }, 00:22:35.580 "method": "bdev_nvme_attach_controller" 00:22:35.580 } 00:22:35.580 EOF 00:22:35.580 )") 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.580 { 00:22:35.580 "params": { 00:22:35.580 "name": "Nvme$subsystem", 00:22:35.580 "trtype": "$TEST_TRANSPORT", 00:22:35.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.580 "adrfam": "ipv4", 00:22:35.580 "trsvcid": "$NVMF_PORT", 00:22:35.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.580 "hdgst": ${hdgst:-false}, 00:22:35.580 "ddgst": ${ddgst:-false} 00:22:35.580 }, 00:22:35.580 "method": "bdev_nvme_attach_controller" 00:22:35.580 } 00:22:35.580 EOF 00:22:35.580 )") 00:22:35.580 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.841 { 00:22:35.841 "params": { 00:22:35.841 "name": "Nvme$subsystem", 00:22:35.841 "trtype": "$TEST_TRANSPORT", 00:22:35.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.841 "adrfam": "ipv4", 00:22:35.841 "trsvcid": "$NVMF_PORT", 00:22:35.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.841 "hdgst": ${hdgst:-false}, 00:22:35.841 "ddgst": ${ddgst:-false} 00:22:35.841 }, 00:22:35.841 "method": "bdev_nvme_attach_controller" 00:22:35.841 } 00:22:35.841 EOF 00:22:35.841 )") 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.841 [2024-11-20 07:36:53.794552] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:22:35.841 [2024-11-20 07:36:53.794609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3465443 ] 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.841 { 00:22:35.841 "params": { 00:22:35.841 "name": "Nvme$subsystem", 00:22:35.841 "trtype": "$TEST_TRANSPORT", 00:22:35.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.841 "adrfam": "ipv4", 00:22:35.841 "trsvcid": "$NVMF_PORT", 00:22:35.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.841 "hdgst": ${hdgst:-false}, 00:22:35.841 "ddgst": ${ddgst:-false} 00:22:35.841 }, 00:22:35.841 "method": "bdev_nvme_attach_controller" 00:22:35.841 } 00:22:35.841 EOF 00:22:35.841 )") 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.841 { 00:22:35.841 "params": { 00:22:35.841 "name": "Nvme$subsystem", 00:22:35.841 "trtype": "$TEST_TRANSPORT", 00:22:35.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.841 "adrfam": "ipv4", 00:22:35.841 "trsvcid": "$NVMF_PORT", 00:22:35.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.841 "hdgst": ${hdgst:-false}, 00:22:35.841 "ddgst": ${ddgst:-false} 00:22:35.841 }, 00:22:35.841 "method": "bdev_nvme_attach_controller" 00:22:35.841 } 00:22:35.841 EOF 00:22:35.841 )") 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.841 { 00:22:35.841 "params": { 00:22:35.841 "name": "Nvme$subsystem", 00:22:35.841 "trtype": "$TEST_TRANSPORT", 00:22:35.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.841 "adrfam": "ipv4", 00:22:35.841 "trsvcid": "$NVMF_PORT", 00:22:35.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.841 "hdgst": ${hdgst:-false}, 00:22:35.841 "ddgst": ${ddgst:-false} 00:22:35.841 }, 00:22:35.841 "method": "bdev_nvme_attach_controller" 00:22:35.841 } 00:22:35.841 EOF 00:22:35.841 )") 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.841 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.841 { 00:22:35.841 "params": { 00:22:35.841 "name": "Nvme$subsystem", 00:22:35.841 "trtype": "$TEST_TRANSPORT", 00:22:35.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.841 
"adrfam": "ipv4", 00:22:35.841 "trsvcid": "$NVMF_PORT", 00:22:35.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.841 "hdgst": ${hdgst:-false}, 00:22:35.841 "ddgst": ${ddgst:-false} 00:22:35.841 }, 00:22:35.842 "method": "bdev_nvme_attach_controller" 00:22:35.842 } 00:22:35.842 EOF 00:22:35.842 )") 00:22:35.842 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:35.842 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:35.842 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:35.842 07:36:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:35.842 "params": { 00:22:35.842 "name": "Nvme1", 00:22:35.842 "trtype": "tcp", 00:22:35.842 "traddr": "10.0.0.2", 00:22:35.842 "adrfam": "ipv4", 00:22:35.842 "trsvcid": "4420", 00:22:35.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:35.842 "hdgst": false, 00:22:35.842 "ddgst": false 00:22:35.842 }, 00:22:35.842 "method": "bdev_nvme_attach_controller" 00:22:35.842 },{ 00:22:35.842 "params": { 00:22:35.842 "name": "Nvme2", 00:22:35.842 "trtype": "tcp", 00:22:35.842 "traddr": "10.0.0.2", 00:22:35.842 "adrfam": "ipv4", 00:22:35.842 "trsvcid": "4420", 00:22:35.842 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:35.842 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:35.842 "hdgst": false, 00:22:35.842 "ddgst": false 00:22:35.842 }, 00:22:35.842 "method": "bdev_nvme_attach_controller" 00:22:35.842 },{ 00:22:35.842 "params": { 00:22:35.842 "name": "Nvme3", 00:22:35.842 "trtype": "tcp", 00:22:35.842 "traddr": "10.0.0.2", 00:22:35.842 "adrfam": "ipv4", 00:22:35.842 "trsvcid": "4420", 00:22:35.842 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:35.842 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:35.842 "hdgst": false, 00:22:35.842 "ddgst": false 00:22:35.842 }, 00:22:35.842 "method": "bdev_nvme_attach_controller" 00:22:35.842 },{ 00:22:35.842 "params": { 00:22:35.842 "name": "Nvme4", 00:22:35.842 "trtype": "tcp", 00:22:35.842 "traddr": "10.0.0.2", 00:22:35.842 "adrfam": "ipv4", 00:22:35.842 "trsvcid": "4420", 00:22:35.842 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:35.842 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:35.842 "hdgst": false, 00:22:35.842 "ddgst": false 00:22:35.842 }, 00:22:35.842 "method": "bdev_nvme_attach_controller" 00:22:35.842 },{ 00:22:35.842 "params": { 00:22:35.842 "name": "Nvme5", 00:22:35.842 "trtype": "tcp", 00:22:35.842 "traddr": "10.0.0.2", 00:22:35.842 "adrfam": "ipv4", 00:22:35.842 "trsvcid": "4420", 00:22:35.842 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:35.842 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:35.842 "hdgst": false, 00:22:35.842 "ddgst": false 00:22:35.842 }, 00:22:35.842 "method": "bdev_nvme_attach_controller" 00:22:35.842 },{ 00:22:35.842 "params": { 00:22:35.842 "name": "Nvme6", 00:22:35.842 "trtype": "tcp", 00:22:35.842 "traddr": "10.0.0.2", 00:22:35.842 "adrfam": "ipv4", 00:22:35.842 "trsvcid": "4420", 00:22:35.842 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:35.842 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:35.842 "hdgst": false, 00:22:35.842 "ddgst": false 00:22:35.842 }, 00:22:35.842 "method": "bdev_nvme_attach_controller" 00:22:35.842 },{ 00:22:35.842 "params": { 00:22:35.842 "name": "Nvme7", 00:22:35.842 "trtype": "tcp", 00:22:35.842 "traddr": "10.0.0.2", 
00:22:35.842 "adrfam": "ipv4", 00:22:35.842 "trsvcid": "4420", 00:22:35.842 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:35.842 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:35.842 "hdgst": false, 00:22:35.842 "ddgst": false 00:22:35.842 }, 00:22:35.842 "method": "bdev_nvme_attach_controller" 00:22:35.842 },{ 00:22:35.842 "params": { 00:22:35.842 "name": "Nvme8", 00:22:35.842 "trtype": "tcp", 00:22:35.842 "traddr": "10.0.0.2", 00:22:35.842 "adrfam": "ipv4", 00:22:35.842 "trsvcid": "4420", 00:22:35.842 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:35.842 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:35.842 "hdgst": false, 00:22:35.842 "ddgst": false 00:22:35.842 }, 00:22:35.842 "method": "bdev_nvme_attach_controller" 00:22:35.842 },{ 00:22:35.842 "params": { 00:22:35.842 "name": "Nvme9", 00:22:35.842 "trtype": "tcp", 00:22:35.842 "traddr": "10.0.0.2", 00:22:35.842 "adrfam": "ipv4", 00:22:35.842 "trsvcid": "4420", 00:22:35.842 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:35.842 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:35.842 "hdgst": false, 00:22:35.842 "ddgst": false 00:22:35.842 }, 00:22:35.842 "method": "bdev_nvme_attach_controller" 00:22:35.842 },{ 00:22:35.842 "params": { 00:22:35.842 "name": "Nvme10", 00:22:35.842 "trtype": "tcp", 00:22:35.842 "traddr": "10.0.0.2", 00:22:35.842 "adrfam": "ipv4", 00:22:35.842 "trsvcid": "4420", 00:22:35.842 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:35.842 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:35.842 "hdgst": false, 00:22:35.842 "ddgst": false 00:22:35.842 }, 00:22:35.842 "method": "bdev_nvme_attach_controller" 00:22:35.842 }' 00:22:35.842 [2024-11-20 07:36:53.886462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.842 [2024-11-20 07:36:53.922812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.848 Running I/O for 10 seconds... 
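[Editor's note] The long printf output ending above is gen_nvmf_target_json at work: one JSON stanza per subsystem is accumulated in a bash array from a heredoc template (the repeated EOF blocks earlier), the placeholders are substituted, and the stanzas are comma-joined via IFS before being handed to bdevperf as --json /dev/fd/63 through process substitution. A trimmed sketch of that accumulate-and-join pattern; the stanza fields and the outer wrapper shape are abbreviated assumptions, not a verbatim copy of nvmf/common.sh:

  config=()
  for i in {1..10}; do
      printf -v stanza '{"params": {"name": "Nvme%s", "traddr": "10.0.0.2", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s"}, "method": "bdev_nvme_attach_controller"}' "$i" "$i"
      config+=("$stanza")
  done
  join_json() { local IFS=,; printf '{"subsystems": [{"subsystem": "bdev", "config": [%s]}]}\n' "${config[*]}"; }
  # fed to the benchmark roughly as: bdevperf -r /var/tmp/bdevperf.sock --json <(join_json) -q 64 -o 65536 -w verify -t 10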
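[Editor's note] "Running I/O for 10 seconds..." is bdevperf starting its verify workload; the waitforio calls traced next poll Nvme1n1's read counter over the RPC socket until it crosses a threshold, so the shutdown under test only fires once I/O is demonstrably flowing (the read_io_count values below climb 3, 67, 131). A minimal sketch of that poll, assuming scripts/rpc.py and jq are on PATH:

  waitforio() {
      local sock=$1 bdev=$2 i ops
      for ((i = 10; i > 0; i--)); do                    # roughly a 2.5 s budget
          ops=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
          (( ops >= 100 )) && return 0                  # enough reads observed
          sleep 0.25
      done
      return 1
  }
  # e.g.: waitforio /var/tmp/bdevperf.sock Nvme1n1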
00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:37.848 07:36:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:38.108 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:38.108 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:38.108 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.108 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.108 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.108 07:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.108 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.108 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:38.108 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:38.108 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3465443 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3465443 ']' 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3465443 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3465443 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3465443' 00:22:38.370 killing process with pid 3465443 00:22:38.370 07:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3465443 00:22:38.370 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3465443
00:22:38.370 Received shutdown signal, test time was about 0.973716 seconds
00:22:38.370
00:22:38.370 Latency(us)
00:22:38.370 [2024-11-20T06:36:56.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:38.370 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.370 Verification LBA range: start 0x0 length 0x400
00:22:38.370 Nvme1n1 : 0.94 203.93 12.75 0.00 0.00 309809.21 14745.60 248162.99
00:22:38.370 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.370 Verification LBA range: start 0x0 length 0x400
00:22:38.370 Nvme2n1 : 0.97 264.59 16.54 0.00 0.00 234056.53 34515.63 253405.87
00:22:38.370 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.370 Verification LBA range: start 0x0 length 0x400
00:22:38.370 Nvme3n1 : 0.96 266.54 16.66 0.00 0.00 227896.96 21408.43 200977.07
00:22:38.370 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.370 Verification LBA range: start 0x0 length 0x400
00:22:38.370 Nvme4n1 : 0.96 266.28 16.64 0.00 0.00 222296.32 16165.55 260396.37
00:22:38.370 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.370 Verification LBA range: start 0x0 length 0x400
00:22:38.370 Nvme5n1 : 0.97 264.33 16.52 0.00 0.00 220117.97 21408.43 246415.36
00:22:38.370 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.370 Verification LBA range: start 0x0 length 0x400
00:22:38.370 Nvme6n1 : 0.95 214.65 13.42 0.00 0.00 262205.27 6034.77 242920.11
00:22:38.370 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.370 Verification LBA range: start 0x0 length 0x400
00:22:38.370 Nvme7n1 : 0.94 211.09 13.19 0.00 0.00 261043.42 2662.40 221948.59
00:22:38.370 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.370 Verification LBA range: start 0x0 length 0x400
00:22:38.370 Nvme8n1 : 0.97 263.15 16.45 0.00 0.00 207276.59 20097.71 237677.23
00:22:38.370 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.370 Verification LBA range: start 0x0 length 0x400
00:22:38.370 Nvme9n1 : 0.95 268.30 16.77 0.00 0.00 197800.11 21626.88 219327.15
00:22:38.370 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.370 Verification LBA range: start 0x0 length 0x400
00:22:38.370 Nvme10n1 : 0.96 200.39 12.52 0.00 0.00 258506.81 15400.96 263891.63
00:22:38.370 [2024-11-20T06:36:56.580Z] ===================================================================================================================
00:22:38.370 [2024-11-20T06:36:56.580Z] Total : 2423.24 151.45 0.00 0.00 236643.14 2662.40 263891.63
00:22:38.632 07:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3465215 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:39.575 07:36:57
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:39.575 rmmod nvme_tcp 00:22:39.575 rmmod nvme_fabrics 00:22:39.575 rmmod nvme_keyring 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3465215 ']' 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3465215 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3465215 ']' 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3465215 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:39.575 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3465215 00:22:39.836 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:39.836 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:39.836 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3465215' 00:22:39.836 killing process with pid 3465215 00:22:39.836 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3465215 00:22:39.836 07:36:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3465215 00:22:40.097 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:40.097 07:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:40.097 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:40.097 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:40.097 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:40.097 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:40.097 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:40.097 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:40.097 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:40.097 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.097 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.097 07:36:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.012 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:42.012 00:22:42.012 real 0m8.198s 00:22:42.012 user 0m25.113s 00:22:42.012 sys 0m1.356s 00:22:42.012 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:42.012 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.012 ************************************ 00:22:42.013 END TEST nvmf_shutdown_tc2 00:22:42.013 ************************************ 00:22:42.013 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:42.013 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:42.013 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:42.013 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:42.275 ************************************ 00:22:42.275 START TEST nvmf_shutdown_tc3 00:22:42.275 ************************************ 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.275 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:42.276 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:42.276 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.276 07:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:42.276 Found net devices under 0000:31:00.0: cvl_0_0 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:42.276 Found net devices under 0000:31:00.1: cvl_0_1 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.276 07:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.276 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:22:42.538 00:22:42.538 --- 10.0.0.2 ping statistics --- 00:22:42.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.538 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:22:42.538 00:22:42.538 --- 10.0.0.1 ping statistics --- 00:22:42.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.538 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3466829 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3466829 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:42.538 07:37:00
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3466829 ']' 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:42.538 07:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.538 [2024-11-20 07:37:00.681085] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:22:42.538 [2024-11-20 07:37:00.681147] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.799 [2024-11-20 07:37:00.777117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.799 [2024-11-20 07:37:00.811318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.799 [2024-11-20 07:37:00.811350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.799 [2024-11-20 07:37:00.811360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.799 [2024-11-20 07:37:00.811364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.799 [2024-11-20 07:37:00.811369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
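The nvmf_tcp_init plumbing traced above reduces to a short shell sequence. This is a condensed sketch, not the verbatim nvmf/common.sh source; the interface names (cvl_0_0/cvl_0_1), addresses, and port are the ones this run discovered:

    # Target port moves into its own network namespace; the initiator port
    # stays in the root namespace, so 10.0.0.1 <-> 10.0.0.2 crosses the wire.
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
    ip addr add 10.0.0.1/24 dev cvl_0_1                                          # initiator side
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT                 # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                           # initiator -> target
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1                    # target -> initiator

Once both pings succeed, nvmf_tgt itself is launched under ip netns exec (the @508 record above), so all target traffic is forced through the namespaced cvl_0_0 port.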
00:22:42.799 [2024-11-20 07:37:00.812719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.799 [2024-11-20 07:37:00.812871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.799 [2024-11-20 07:37:00.813021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.799 [2024-11-20 07:37:00.813023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:43.370 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:43.370 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:43.370 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:43.370 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.370 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.370 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.371 [2024-11-20 07:37:01.528737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.371 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.631 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.631 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.631 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.631 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.632 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.632 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.632 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:43.632 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.632 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.632 Malloc1 00:22:43.632 [2024-11-20 07:37:01.642622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.632 Malloc2 00:22:43.632 Malloc3 00:22:43.632 Malloc4 00:22:43.632 Malloc5 00:22:43.632 Malloc6 00:22:43.893 Malloc7 00:22:43.893 Malloc8 00:22:43.893 Malloc9 00:22:43.893 Malloc10 00:22:43.893 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.893 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:43.893 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.893 07:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.893 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3467166 00:22:43.893 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3467166 /var/tmp/bdevperf.sock 00:22:43.893 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3467166 ']' 00:22:43.893 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.893 07:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:43.893 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.893 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:43.893 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:43.893 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:43.893 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.894 { 00:22:43.894 "params": { 00:22:43.894 "name": "Nvme$subsystem", 00:22:43.894 "trtype": "$TEST_TRANSPORT", 00:22:43.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.894 "adrfam": "ipv4", 00:22:43.894 "trsvcid": "$NVMF_PORT", 00:22:43.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.894 "hdgst": ${hdgst:-false}, 00:22:43.894 "ddgst": ${ddgst:-false} 00:22:43.894 }, 00:22:43.894 "method": "bdev_nvme_attach_controller" 00:22:43.894 } 00:22:43.894 EOF 00:22:43.894 )") 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.894 { 00:22:43.894 "params": { 00:22:43.894 "name": "Nvme$subsystem", 00:22:43.894 "trtype": "$TEST_TRANSPORT", 00:22:43.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.894 "adrfam": "ipv4", 00:22:43.894 "trsvcid": "$NVMF_PORT", 00:22:43.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.894 "hdgst": ${hdgst:-false}, 00:22:43.894 "ddgst": ${ddgst:-false} 00:22:43.894 }, 00:22:43.894 "method": "bdev_nvme_attach_controller" 00:22:43.894 } 00:22:43.894 EOF 00:22:43.894 )") 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.894 { 00:22:43.894 "params": { 00:22:43.894 
"name": "Nvme$subsystem", 00:22:43.894 "trtype": "$TEST_TRANSPORT", 00:22:43.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.894 "adrfam": "ipv4", 00:22:43.894 "trsvcid": "$NVMF_PORT", 00:22:43.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.894 "hdgst": ${hdgst:-false}, 00:22:43.894 "ddgst": ${ddgst:-false} 00:22:43.894 }, 00:22:43.894 "method": "bdev_nvme_attach_controller" 00:22:43.894 } 00:22:43.894 EOF 00:22:43.894 )") 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.894 { 00:22:43.894 "params": { 00:22:43.894 "name": "Nvme$subsystem", 00:22:43.894 "trtype": "$TEST_TRANSPORT", 00:22:43.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.894 "adrfam": "ipv4", 00:22:43.894 "trsvcid": "$NVMF_PORT", 00:22:43.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.894 "hdgst": ${hdgst:-false}, 00:22:43.894 "ddgst": ${ddgst:-false} 00:22:43.894 }, 00:22:43.894 "method": "bdev_nvme_attach_controller" 00:22:43.894 } 00:22:43.894 EOF 00:22:43.894 )") 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.894 { 00:22:43.894 "params": { 00:22:43.894 "name": "Nvme$subsystem", 00:22:43.894 "trtype": "$TEST_TRANSPORT", 00:22:43.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.894 "adrfam": "ipv4", 00:22:43.894 "trsvcid": "$NVMF_PORT", 00:22:43.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.894 "hdgst": ${hdgst:-false}, 00:22:43.894 "ddgst": ${ddgst:-false} 00:22:43.894 }, 00:22:43.894 "method": "bdev_nvme_attach_controller" 00:22:43.894 } 00:22:43.894 EOF 00:22:43.894 )") 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.894 { 00:22:43.894 "params": { 00:22:43.894 "name": "Nvme$subsystem", 00:22:43.894 "trtype": "$TEST_TRANSPORT", 00:22:43.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.894 "adrfam": "ipv4", 00:22:43.894 "trsvcid": "$NVMF_PORT", 00:22:43.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.894 "hdgst": ${hdgst:-false}, 00:22:43.894 "ddgst": ${ddgst:-false} 00:22:43.894 }, 00:22:43.894 "method": "bdev_nvme_attach_controller" 00:22:43.894 } 00:22:43.894 EOF 00:22:43.894 )") 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:43.894 [2024-11-20 07:37:02.087076] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:22:43.894 [2024-11-20 07:37:02.087128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467166 ] 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.894 { 00:22:43.894 "params": { 00:22:43.894 "name": "Nvme$subsystem", 00:22:43.894 "trtype": "$TEST_TRANSPORT", 00:22:43.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.894 "adrfam": "ipv4", 00:22:43.894 "trsvcid": "$NVMF_PORT", 00:22:43.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.894 "hdgst": ${hdgst:-false}, 00:22:43.894 "ddgst": ${ddgst:-false} 00:22:43.894 }, 00:22:43.894 "method": "bdev_nvme_attach_controller" 00:22:43.894 } 00:22:43.894 EOF 00:22:43.894 )") 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.894 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.894 { 00:22:43.894 "params": { 00:22:43.894 "name": "Nvme$subsystem", 00:22:43.894 "trtype": "$TEST_TRANSPORT", 00:22:43.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.894 "adrfam": "ipv4", 00:22:43.894 "trsvcid": "$NVMF_PORT", 00:22:43.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.894 "hdgst": ${hdgst:-false}, 00:22:43.894 "ddgst": ${ddgst:-false} 00:22:43.894 }, 00:22:43.894 "method": "bdev_nvme_attach_controller" 00:22:43.894 } 00:22:43.894 EOF 00:22:43.894 )") 00:22:44.156 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.156 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.156 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.156 { 00:22:44.156 "params": { 00:22:44.156 "name": "Nvme$subsystem", 00:22:44.156 "trtype": "$TEST_TRANSPORT", 00:22:44.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.156 "adrfam": "ipv4", 00:22:44.156 "trsvcid": "$NVMF_PORT", 00:22:44.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.156 "hdgst": ${hdgst:-false}, 00:22:44.156 "ddgst": ${ddgst:-false} 00:22:44.156 }, 00:22:44.156 "method": "bdev_nvme_attach_controller" 00:22:44.156 } 00:22:44.156 EOF 00:22:44.156 )") 00:22:44.156 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.156 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.156 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.156 { 00:22:44.156 "params": { 00:22:44.156 "name": "Nvme$subsystem", 00:22:44.156 "trtype": "$TEST_TRANSPORT", 00:22:44.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.156 
"adrfam": "ipv4", 00:22:44.156 "trsvcid": "$NVMF_PORT", 00:22:44.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.156 "hdgst": ${hdgst:-false}, 00:22:44.156 "ddgst": ${ddgst:-false} 00:22:44.156 }, 00:22:44.156 "method": "bdev_nvme_attach_controller" 00:22:44.156 } 00:22:44.156 EOF 00:22:44.156 )") 00:22:44.156 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.156 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:44.156 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:44.156 07:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:44.156 "params": { 00:22:44.156 "name": "Nvme1", 00:22:44.156 "trtype": "tcp", 00:22:44.156 "traddr": "10.0.0.2", 00:22:44.156 "adrfam": "ipv4", 00:22:44.156 "trsvcid": "4420", 00:22:44.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.156 "hdgst": false, 00:22:44.156 "ddgst": false 00:22:44.156 }, 00:22:44.156 "method": "bdev_nvme_attach_controller" 00:22:44.156 },{ 00:22:44.156 "params": { 00:22:44.156 "name": "Nvme2", 00:22:44.156 "trtype": "tcp", 00:22:44.156 "traddr": "10.0.0.2", 00:22:44.156 "adrfam": "ipv4", 00:22:44.156 "trsvcid": "4420", 00:22:44.156 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:44.156 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:44.156 "hdgst": false, 00:22:44.156 "ddgst": false 00:22:44.156 }, 00:22:44.156 "method": "bdev_nvme_attach_controller" 00:22:44.156 },{ 00:22:44.156 "params": { 00:22:44.156 "name": "Nvme3", 00:22:44.156 "trtype": "tcp", 00:22:44.156 "traddr": "10.0.0.2", 00:22:44.156 "adrfam": "ipv4", 00:22:44.156 "trsvcid": "4420", 00:22:44.156 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:44.156 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:44.156 "hdgst": false, 00:22:44.156 "ddgst": false 00:22:44.156 }, 00:22:44.156 "method": "bdev_nvme_attach_controller" 00:22:44.156 },{ 00:22:44.156 "params": { 00:22:44.156 "name": "Nvme4", 00:22:44.156 "trtype": "tcp", 00:22:44.156 "traddr": "10.0.0.2", 00:22:44.156 "adrfam": "ipv4", 00:22:44.156 "trsvcid": "4420", 00:22:44.156 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:44.156 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:44.157 "hdgst": false, 00:22:44.157 "ddgst": false 00:22:44.157 }, 00:22:44.157 "method": "bdev_nvme_attach_controller" 00:22:44.157 },{ 00:22:44.157 "params": { 00:22:44.157 "name": "Nvme5", 00:22:44.157 "trtype": "tcp", 00:22:44.157 "traddr": "10.0.0.2", 00:22:44.157 "adrfam": "ipv4", 00:22:44.157 "trsvcid": "4420", 00:22:44.157 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:44.157 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:44.157 "hdgst": false, 00:22:44.157 "ddgst": false 00:22:44.157 }, 00:22:44.157 "method": "bdev_nvme_attach_controller" 00:22:44.157 },{ 00:22:44.157 "params": { 00:22:44.157 "name": "Nvme6", 00:22:44.157 "trtype": "tcp", 00:22:44.157 "traddr": "10.0.0.2", 00:22:44.157 "adrfam": "ipv4", 00:22:44.157 "trsvcid": "4420", 00:22:44.157 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:44.157 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:44.157 "hdgst": false, 00:22:44.157 "ddgst": false 00:22:44.157 }, 00:22:44.157 "method": "bdev_nvme_attach_controller" 00:22:44.157 },{ 00:22:44.157 "params": { 00:22:44.157 "name": "Nvme7", 00:22:44.157 "trtype": "tcp", 00:22:44.157 "traddr": "10.0.0.2", 
00:22:44.157 "adrfam": "ipv4", 00:22:44.157 "trsvcid": "4420", 00:22:44.157 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:44.157 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:44.157 "hdgst": false, 00:22:44.157 "ddgst": false 00:22:44.157 }, 00:22:44.157 "method": "bdev_nvme_attach_controller" 00:22:44.157 },{ 00:22:44.157 "params": { 00:22:44.157 "name": "Nvme8", 00:22:44.157 "trtype": "tcp", 00:22:44.157 "traddr": "10.0.0.2", 00:22:44.157 "adrfam": "ipv4", 00:22:44.157 "trsvcid": "4420", 00:22:44.157 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:44.157 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:44.157 "hdgst": false, 00:22:44.157 "ddgst": false 00:22:44.157 }, 00:22:44.157 "method": "bdev_nvme_attach_controller" 00:22:44.157 },{ 00:22:44.157 "params": { 00:22:44.157 "name": "Nvme9", 00:22:44.157 "trtype": "tcp", 00:22:44.157 "traddr": "10.0.0.2", 00:22:44.157 "adrfam": "ipv4", 00:22:44.157 "trsvcid": "4420", 00:22:44.157 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:44.157 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:44.157 "hdgst": false, 00:22:44.157 "ddgst": false 00:22:44.157 }, 00:22:44.157 "method": "bdev_nvme_attach_controller" 00:22:44.157 },{ 00:22:44.157 "params": { 00:22:44.157 "name": "Nvme10", 00:22:44.157 "trtype": "tcp", 00:22:44.157 "traddr": "10.0.0.2", 00:22:44.157 "adrfam": "ipv4", 00:22:44.157 "trsvcid": "4420", 00:22:44.157 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:44.157 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:44.157 "hdgst": false, 00:22:44.157 "ddgst": false 00:22:44.157 }, 00:22:44.157 "method": "bdev_nvme_attach_controller" 00:22:44.157 }' 00:22:44.157 [2024-11-20 07:37:02.177697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.157 [2024-11-20 07:37:02.213983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.542 Running I/O for 10 seconds... 
00:22:45.542 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:45.542 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:45.542 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:45.542 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.542 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:45.803 07:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:46.064 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:46.064 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:46.064 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:46.064 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:46.064 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.064 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.064 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.064 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:46.064 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:46.064 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:46.326 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:46.326 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:46.326 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:46.326 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:46.326 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.326 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3466829 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3466829 ']' 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3466829 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3466829 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:46.603 07:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3466829' 00:22:46.603 killing process with pid 3466829 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3466829 00:22:46.603 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3466829 00:22:46.603 [2024-11-20 07:37:04.645850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) to be set 00:22:46.603 [2024-11-20 07:37:04.645989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd6fa0 is same with the state(6) 
to be set 00:22:46.604 [2024-11-20 07:37:04.647138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2241970 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same
with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649585] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.649673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd7940 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the 
state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.605 [2024-11-20 07:37:04.650980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.650985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.650989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.650994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.650999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 
07:37:04.651111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8300 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same 
with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651952] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.651994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.652003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.652007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.652012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.606 [2024-11-20 07:37:04.652017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the 
state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd87d0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 
07:37:04.652947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.652998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.653003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.653008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.653013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.653018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.653023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.653028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.653033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.607 [2024-11-20 07:37:04.653038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same 
with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8ca0 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653779] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the 
state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.653996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.654000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.608 [2024-11-20 07:37:04.658240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.608 [2024-11-20 07:37:04.658275] nvme_qpair.c: 
00:22:46.608 [2024-11-20 07:37:04.658240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:46.608 [2024-11-20 07:37:04.658275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... same ASYNC EVENT REQUEST (cid:0-3) / ABORTED - SQ DELETION pattern repeated through 07:37:04.659066 for tqpair=0xdf6cd0, 0xe43ea0, 0x9befd0, 0xdf4a10, 0xdfb0a0, 0x9cb1b0, 0x8e1610, 0x9cad30, 0x9ca270, each block ending with nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=<addr> is same with the state(6) to be set ...]
00:22:46.609 [2024-11-20 07:37:04.659417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.609 [2024-11-20 07:37:04.659439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.609 [2024-11-20 07:37:04.659456] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.609 [2024-11-20 07:37:04.659465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.609 [2024-11-20 07:37:04.659475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.609 [2024-11-20 07:37:04.659483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.609 [2024-11-20 07:37:04.659492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.609 [2024-11-20 07:37:04.659500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.609 [2024-11-20 07:37:04.659509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.609 [2024-11-20 07:37:04.659517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.609 [2024-11-20 07:37:04.659527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.609 [2024-11-20 07:37:04.659535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.609 [2024-11-20 07:37:04.659545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.609 [2024-11-20 07:37:04.659553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.609 [2024-11-20 07:37:04.659563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.609 [2024-11-20 07:37:04.659570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.659990] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.659998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.610 [2024-11-20 07:37:04.660263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.610 [2024-11-20 07:37:04.660274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.660558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.660588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:46.611 [2024-11-20 07:37:04.661094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.611 [2024-11-20 07:37:04.661483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.611 [2024-11-20 07:37:04.661490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.661499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.661507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.661516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.661524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.661534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.661542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.661553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.661560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.661570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.661577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 
[2024-11-20 07:37:04.661586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.661595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.661605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.661613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.661623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.661631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.661640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.661647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.661657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.661665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.661675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.661682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.661691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.661698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.666937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.612 [2024-11-20 07:37:04.666962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.612 [2024-11-20 07:37:04.666971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.612 [2024-11-20 07:37:04.666980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.612 [2024-11-20 07:37:04.666987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.612 [2024-11-20 07:37:04.666995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.612 [2024-11-20 07:37:04.667001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.612 [2024-11-20 07:37:04.667007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.612 [2024-11-20 07:37:04.667014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.612 [2024-11-20 07:37:04.667020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.612 [2024-11-20 07:37:04.667026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.612 [2024-11-20 07:37:04.667032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.612 [2024-11-20 07:37:04.667038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2240f90 is same with the state(6) to be set 00:22:46.612 [2024-11-20 07:37:04.675655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675822] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.612 [2024-11-20 07:37:04.675964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.612 [2024-11-20 07:37:04.675972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.613 [2024-11-20 07:37:04.675982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.613 [2024-11-20 07:37:04.675996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.613 [2024-11-20 07:37:04.676005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.613 [2024-11-20 07:37:04.676013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.613 [2024-11-20 07:37:04.676022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.613 [2024-11-20 07:37:04.676031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.613 [2024-11-20 07:37:04.676041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.613 [2024-11-20 07:37:04.676049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.613 [2024-11-20 07:37:04.676059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.613 [2024-11-20 07:37:04.676066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.613 [2024-11-20 07:37:04.676075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.613 [2024-11-20 07:37:04.676083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.613 [2024-11-20 07:37:04.676094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.613 [2024-11-20 07:37:04.676102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.613 [2024-11-20 07:37:04.676113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.613 [2024-11-20 07:37:04.676122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.613 [2024-11-20 07:37:04.676131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.613 [2024-11-20 07:37:04.676138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.613 [2024-11-20 07:37:04.676148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.613 [2024-11-20 07:37:04.676157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.613 [2024-11-20 07:37:04.676167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.613 [2024-11-20 07:37:04.676176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.613 [2024-11-20 07:37:04.676186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... elided: this READ and READ cid:48-49 (lba 30720, 30848; len:128) each completed with "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0" ...]
00:22:46.613 [2024-11-20 07:37:04.676280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... elided: 64 WRITE commands (sqid:1 cid:0-63, lba 24576-32640, len:128) each printed by nvme_io_qpair_print_command and completed with "ABORTED - SQ DELETION (00/08)" ...]
00:22:46.615 [2024-11-20 07:37:04.678037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:46.615 [2024-11-20 07:37:04.678171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf6cd0 (9): Bad file descriptor
00:22:46.615 [2024-11-20 07:37:04.678199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe43ea0 (9): Bad file descriptor
[... elided: 4 admin ASYNC EVENT REQUEST commands (0c, qid:0 cid:0-3, cdw10:00000000 cdw11:00000000) each completed with "ABORTED - SQ DELETION (00/08)" ...]
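[editor's note: the "CQ transport error -6 (No such device or address)" lines above are spdk_nvme_qpair_process_completions() reporting -ENXIO (errno 6 on Linux) after the TCP connection to the target dropped; every command still outstanding on that submission queue is then completed with the synthetic "ABORTED - SQ DELETION (00/08)" status filling the elided blocks. The sketch below shows how a host application might observe this return code and escalate to a controller reset. It is a minimal illustration, not the test's actual code, and assumes an already-connected ctrlr/qpair pair obtained through the usual SPDK probe/attach flow.]

	/* sketch: poll an I/O qpair; escalate a transport failure to a reset */
	#include <errno.h>
	#include "spdk/nvme.h"

	static int
	poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
	{
		/* max_completions == 0 means no limit: drain everything ready */
		int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

		if (rc < 0) {
			/* rc == -ENXIO (-6) matches "CQ transport error -6" above;
			 * queued commands were already failed with SQ DELETION,
			 * so the only recovery left is a controller reset. */
			return spdk_nvme_ctrlr_reset(ctrlr);
		}
		return 0;
	}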
00:22:46.615 [2024-11-20 07:37:04.678297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2aba0 is same with the state(6) to be set
[... elided: "Failed to flush tqpair=... (9): Bad file descriptor" from nvme_tcp_qpair_process_completions for tqpair 0x9befd0, 0xdf4a10, 0xdfb0a0, 0x9cb1b0, 0x8e1610, 0x9cad30, 0x9ca270 ...]
00:22:46.615 [2024-11-20 07:37:04.682296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:46.615 [2024-11-20 07:37:04.682717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:46.615 [2024-11-20 07:37:04.682769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:46.615 [2024-11-20 07:37:04.682791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2aba0 (9): Bad file descriptor
00:22:46.615 [2024-11-20 07:37:04.683138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:46.615 [2024-11-20 07:37:04.683178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9befd0 with addr=10.0.0.2, port=4420
00:22:46.615 [2024-11-20 07:37:04.683190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9befd0 is same with the state(6) to be set
[... elided: repeated "Unexpected PDU type 0x00" from nvme_tcp_pdu_ch_handle; two further "connect() failed, errno = 111" with sock connection errors for tqpair 0x9ca270 and 0xe2aba0 (both addr=10.0.0.2, port=4420) and their recv-state messages; "Failed to flush tqpair" for 0x9befd0 and 0x9ca270 ...]
00:22:46.615 [2024-11-20 07:37:04.685248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:46.615 [2024-11-20 07:37:04.685257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:46.615 [2024-11-20 07:37:04.685267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:46.615 [2024-11-20 07:37:04.685277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
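[editor's note: errno 111 is ECONNREFUSED on Linux, so the reconnect attempts to the target at 10.0.0.2:4420 above are being actively refused while that side is down; spdk_nvme_ctrlr_reconnect_poll_async therefore gives up and the bdev layer marks the reset of cnode2 failed. The standalone POSIX snippet below reproduces the same errno with nothing listening on the port; 127.0.0.1 is used here because a loopback connect guarantees a refusal rather than a timeout, whereas the log's failing address was 10.0.0.2. This is an illustration of the error code only, not SPDK's posix_sock_create.]

	#include <arpa/inet.h>
	#include <errno.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/socket.h>
	#include <unistd.h>

	int
	main(void)
	{
		int fd = socket(AF_INET, SOCK_STREAM, 0);
		struct sockaddr_in sa = {
			.sin_family = AF_INET,
			.sin_port = htons(4420),	/* NVMe-oF TCP port from the log */
		};

		if (fd < 0) {
			return 1;
		}
		inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

		if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
			/* with no listener this prints:
			 * connect() failed, errno = 111 (Connection refused) */
			printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
		}
		close(fd);
		return 0;
	}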
00:22:46.615 [2024-11-20 07:37:04.685323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... elided: READ cid:4-46 (lba 25088-30464), WRITE cid:0-3 (lba 32768-33152), and READ cid:47-63 (lba 30592-32640), all len:128, each printed by nvme_io_qpair_print_command and completed with "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0" ...]
00:22:46.617 [2024-11-20 07:37:04.686507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef4de0 is same with the state(6) to be set
[... elided: READ cid:4-59 (lba 16896-23936, len:128) each completed with "ABORTED - SQ DELETION (00/08)"; the listing continues ...]
00:22:46.618 [2024-11-20
07:37:04.687630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.618 [2024-11-20 07:37:04.687638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.618 [2024-11-20 07:37:04.687649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.618 [2024-11-20 07:37:04.687657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.618 [2024-11-20 07:37:04.687667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.618 [2024-11-20 07:37:04.687674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.618 [2024-11-20 07:37:04.687684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.618 [2024-11-20 07:37:04.687692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.618 [2024-11-20 07:37:04.687702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.618 [2024-11-20 07:37:04.687710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.618 [2024-11-20 07:37:04.687719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.618 [2024-11-20 07:37:04.687727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.618 [2024-11-20 07:37:04.687736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.618 [2024-11-20 07:37:04.687748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.618 [2024-11-20 07:37:04.687758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.618 [2024-11-20 07:37:04.687768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.618 [2024-11-20 07:37:04.687777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac89e0 is same with the state(6) to be set 00:22:46.618 [2024-11-20 07:37:04.687877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2aba0 (9): Bad file descriptor 00:22:46.618 [2024-11-20 07:37:04.687893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:46.618 [2024-11-20 07:37:04.687900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:46.618 [2024-11-20 07:37:04.687909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:46.619 [2024-11-20 07:37:04.687916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:46.619 [2024-11-20 07:37:04.690380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:46.619 [2024-11-20 07:37:04.690400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:46.619 [2024-11-20 07:37:04.690427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:46.619 [2024-11-20 07:37:04.690436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:46.619 [2024-11-20 07:37:04.690444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:46.619 [2024-11-20 07:37:04.690453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:46.619 [2024-11-20 07:37:04.691042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.619 [2024-11-20 07:37:04.691082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cad30 with addr=10.0.0.2, port=4420 00:22:46.619 [2024-11-20 07:37:04.691094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cad30 is same with the state(6) to be set 00:22:46.619 [2024-11-20 07:37:04.691460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.619 [2024-11-20 07:37:04.691472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8e1610 with addr=10.0.0.2, port=4420 00:22:46.619 [2024-11-20 07:37:04.691480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1610 is same with the state(6) to be set 00:22:46.619 [2024-11-20 07:37:04.691522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.691989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.691997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.692008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.692016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.692026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.692034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.692043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.692052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.692063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.692071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.692081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.692090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.692099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.692106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.692116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.692123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.692133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.619 [2024-11-20 07:37:04.692142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.619 [2024-11-20 07:37:04.692152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:46.620 [2024-11-20 07:37:04.692177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 
07:37:04.692356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692535] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.692693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.692702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef2800 is same with the state(6) to be set 00:22:46.620 [2024-11-20 07:37:04.694278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.694296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.694307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.694315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.694326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.694338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.694347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.694355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.694366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.694374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.694384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.694391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.694402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.694410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.694420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.620 [2024-11-20 07:37:04.694428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.620 [2024-11-20 07:37:04.694438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.694447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.694456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.694465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.694476] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.694484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.694494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.694501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.694511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.694519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.694529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.694536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.694546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.694555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.694568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.694576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.694586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.694594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.694604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.694611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.694621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.694630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.694640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.694648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.703721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.703766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.703778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.703788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.703799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.703806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.703817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.703826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.703835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.703842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.703852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.703861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.703871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.703878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.703887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.703902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.703912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.703919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.703930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.703938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.703948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.703955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.703964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.703973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.703983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.703991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.704000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.704008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.704019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.704027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.704036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.704044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.704053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.704060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.704070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.704078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.704088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.704096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.704106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.704114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.704125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.704133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.704143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.704151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.621 [2024-11-20 07:37:04.704162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.621 [2024-11-20 07:37:04.704169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:46.622 [2024-11-20 07:37:04.704312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 
07:37:04.704489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.704539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.704549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162d190 is same with the state(6) to be set 00:22:46.622 [2024-11-20 07:37:04.705895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.705912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.705932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.705940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.705950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.705959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.705968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.705976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.705987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.705995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.706004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.622 [2024-11-20 07:37:04.706012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.622 [2024-11-20 07:37:04.706022] nvme_qpair.c: 
00:22:46.622 [2024-11-20 07:37:04.705895 .. 07:37:04.707063] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (64 commands, lba stride 128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.624 [2024-11-20 07:37:04.707072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ad20 is same with the state(6) to be set
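The aborted READs form one contiguous region split into fixed-size commands: every entry carries len:128, and the lba of consecutive cids advances by exactly 128 blocks (16384, 16512, ..., 24448 for cid 0..63 above). A small sketch reproducing that arithmetic; the base lba and cid range are values copied from the burst above, not output of any SPDK call:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t base_lba = 16384; /* lba of cid:0 in the burst above */
    const uint32_t len = 128;        /* blocks per command, from len:128 */

    /* lba(cid) = base + cid * len; the last command lands at
     * 16384 + 63 * 128 = 24448, matching the final entry above. */
    for (uint32_t cid = 0; cid <= 63; cid++) {
        printf("READ cid:%u lba:%llu len:%u\n",
               cid, (unsigned long long)(base_lba + (uint64_t)cid * len), len);
    }
    return 0;
}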
00:22:46.624 [2024-11-20 07:37:04.708635 .. 07:37:04.709792] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0..4 nsid:1 lba:32768..33280 len:128 interleaved with READ sqid:1 cid:5..63 nsid:1 lba:25216..32640 len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (5 WRITE + 59 READ commands, lba stride 128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.626 [2024-11-20 07:37:04.709801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16670 is same with the state(6) to be set
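Since every completion in these bursts carries the identical abort status, a completion handler only needs a single SCT/SC comparison to classify them. A minimal sketch of such a predicate, assuming nothing beyond the numeric values printed in the log; the constant names below are local stand-ins chosen to echo the spec wording, not identifiers from SPDK headers:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* "(00/08)" from the log: generic status code type, status code 0x08. */
#define STATUS_TYPE_GENERIC        0x0
#define STATUS_ABORTED_SQ_DELETION 0x08

struct cpl_status {
    uint8_t sct; /* status code type */
    uint8_t sc;  /* status code */
};

/* True when a command was aborted because its submission queue was
 * deleted, which is what every completion in the bursts above reports. */
static bool aborted_by_sq_deletion(const struct cpl_status *st)
{
    return st->sct == STATUS_TYPE_GENERIC && st->sc == STATUS_ABORTED_SQ_DELETION;
}

int main(void)
{
    struct cpl_status st = { .sct = 0x0, .sc = 0x08 };
    printf("aborted by SQ deletion: %s\n",
           aborted_by_sq_deletion(&st) ? "yes" : "no");
    return 0;
}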
00:22:46.626 [2024-11-20 07:37:04.711081 .. 07:37:04.712124] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:4..61 nsid:1 lba:25088..32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (58 commands so far, lba stride 128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.627 [2024-11-20 
07:37:04.712131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.627 [2024-11-20 07:37:04.712141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.627 [2024-11-20 07:37:04.712149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.627 [2024-11-20 07:37:04.712158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.627 [2024-11-20 07:37:04.712166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.627 [2024-11-20 07:37:04.718207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.627 [2024-11-20 07:37:04.718246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.627 [2024-11-20 07:37:04.718258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.627 [2024-11-20 07:37:04.718266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.627 [2024-11-20 07:37:04.718277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.627 [2024-11-20 07:37:04.718285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.627 [2024-11-20 07:37:04.718295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.627 [2024-11-20 07:37:04.718302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.627 [2024-11-20 07:37:04.718311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd0980 is same with the state(6) to be set 00:22:46.627 [2024-11-20 07:37:04.720174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:46.627 [2024-11-20 07:37:04.720210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:46.627 [2024-11-20 07:37:04.720224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:46.627 [2024-11-20 07:37:04.720238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:46.627 [2024-11-20 07:37:04.720302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cad30 (9): Bad file descriptor 00:22:46.627 [2024-11-20 07:37:04.720317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e1610 (9): Bad file descriptor 00:22:46.627 [2024-11-20 07:37:04.720384] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:22:46.627 [2024-11-20 07:37:04.720407] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:22:46.627 [2024-11-20 07:37:04.720423] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:22:46.627 task offset: 30208 on job bdev=Nvme2n1 fails
00:22:46.627
00:22:46.627 Latency(us) [2024-11-20T06:37:04.838Z]
00:22:46.627 All jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended in error after roughly its listed runtime.
00:22:46.627
Device      runtime(s)     IOPS    MiB/s   Fail/s   TO/s     Average         min         max
Nvme1n1           0.97   131.29     8.21    65.65   0.00   321501.58    21299.20   255153.49
Nvme2n1           0.96   199.85    12.49    66.62   0.00   232757.55    19333.12   237677.23
Nvme3n1           0.97   202.03    12.63    65.97   0.00   226867.20     5789.01   251658.24
Nvme4n1           0.96   199.60    12.47    66.53   0.00   223586.13    18896.21   263891.63
Nvme5n1           0.99   129.72     8.11    64.86   0.00   300175.64    21299.20   270882.13
Nvme6n1           0.99   129.39     8.09    64.70   0.00   294781.16    14964.05   279620.27
Nvme7n1           0.97   135.89     8.49    65.89   0.00   276722.14    17694.72   251658.24
Nvme8n1           0.99   198.59    12.41    64.52   0.00   208112.16     9284.27   253405.87
Nvme9n1           0.96   199.32    12.46    66.44   0.00   200319.79    19770.03   253405.87
Nvme10n1          1.00   195.90    12.24    63.97   0.00   201715.40    19223.89   251658.24
===================================================================================================
Total                -  1721.58   107.60   655.13   0.00   243021.17     5789.01   279620.27
00:22:46.628 [2024-11-20 07:37:04.765611] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:46.628 [2024-11-20 07:37:04.765663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:46.628 1721.58 IOPS, 107.60 MiB/s [2024-11-20T06:37:04.838Z]
00:22:46.628 [2024-11-20 07:37:04.765994-766025] posix.c:1054 / nvme_tcp.c:2288/326: *ERROR*: connect() failed (errno = 111), sock connection error of tqpair=0x9cb1b0 with addr=10.0.0.2, port=4420; recv state set
00:22:46.628 [2024-11-20 07:37:04.766362-766380] posix.c:1054 / nvme_tcp.c:2288/326: *ERROR*: connect() failed (errno = 111), sock connection error of tqpair=0xdf4a10 with addr=10.0.0.2, port=4420; recv state set
00:22:46.628 [2024-11-20 07:37:04.766675-766692] posix.c:1054 / nvme_tcp.c:2288/326: *ERROR*: connect() failed (errno = 111), sock connection error of tqpair=0xdf6cd0 with addr=10.0.0.2, port=4420; recv state set
00:22:46.628 [2024-11-20 07:37:04.766928-766946] posix.c:1054 / nvme_tcp.c:2288/326: *ERROR*: connect() failed (errno = 111), sock connection error of tqpair=0xdfb0a0 with addr=10.0.0.2, port=4420; recv state set
00:22:46.628 [2024-11-20 07:37:04.766961-766987] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state / controller reinitialization failed / in failed state / Resetting controller failed.
00:22:46.628 [2024-11-20 07:37:04.766997-767011] nvme_ctrlr.c:4206/1826/1110: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state / controller reinitialization failed / in failed state.
00:22:46.628 [2024-11-20 07:37:04.767018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:46.628 [2024-11-20 07:37:04.767050-767073] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2/cnode4/cnode9, 1] Unable to perform failover, already in progress.
00:22:46.628 [2024-11-20 07:37:04.767104-767147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdfb0a0, 0xdf6cd0, 0xdf4a10, 0x9cb1b0 (9): Bad file descriptor
00:22:46.628 [2024-11-20 07:37:04.768534-768562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2/cnode4/cnode9, 1] resetting controller
00:22:46.628 [2024-11-20 07:37:04.768964-768987] posix.c:1054 / nvme_tcp.c:2288/326: *ERROR*: connect() failed (errno = 111), sock connection error of tqpair=0xe43ea0 with addr=10.0.0.2, port=4420; recv state set
00:22:46.628 [2024-11-20 07:37:04.769016-769070] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7/cnode3/cnode8/cnode6/cnode5/cnode1, 1] Unable to perform failover, already in progress.
00:22:46.628 [2024-11-20 07:37:04.769126-769140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7/cnode3, 1] resetting controller
00:22:46.628 [2024-11-20 07:37:04.769518-770110] posix.c:1054 / nvme_tcp.c:2288/326: *ERROR*: connect() failed (errno = 111) for tqpair=0x9befd0, 0x9ca270, 0xe2aba0 with addr=10.0.0.2, port=4420; recv state set
00:22:46.628 [2024-11-20 07:37:04.770120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe43ea0 (9): Bad file descriptor
00:22:46.628 [2024-11-20 07:37:04.770129-770231] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode1, cnode5, cnode6, cnode8, 1] Ctrlr is in error state / controller reinitialization failed / in failed state / Resetting controller failed.
00:22:46.628-629 [2024-11-20 07:37:04.770644-771021] posix.c:1054 / nvme_tcp.c:2288/326: *ERROR*: connect() failed (errno = 111) for tqpair=0x8e1610, 0x9cad30 with addr=10.0.0.2, port=4420; recv state set
00:22:46.629 [2024-11-20 07:37:04.771030-771049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9befd0, 0x9ca270, 0xe2aba0 (9): Bad file descriptor
00:22:46.629 [2024-11-20 07:37:04.771058-771078] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state / controller reinitialization failed / in failed state / Resetting controller failed.
00:22:46.629 [2024-11-20 07:37:04.771104-771115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e1610, 0x9cad30 (9): Bad file descriptor
00:22:46.629 [2024-11-20 07:37:04.771123-771272] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode2, cnode4, cnode9, cnode7, cnode3, 1] Ctrlr is in error state / controller reinitialization failed / in failed state / Resetting controller failed.
00:22:46.891 07:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3467166 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3467166 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3467166 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.835 rmmod nvme_tcp 00:22:47.835 
rmmod nvme_fabrics 00:22:47.835 rmmod nvme_keyring 00:22:47.835 07:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3466829 ']' 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3466829 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3466829 ']' 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3466829 00:22:47.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3466829) - No such process 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3466829 is not found' 00:22:47.835 Process with pid 3466829 is not found 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.835 07:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.383 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.383 00:22:50.383 real 0m7.873s 00:22:50.383 user 0m19.357s 00:22:50.383 sys 0m1.271s 00:22:50.383 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:50.383 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.383 ************************************ 00:22:50.383 END TEST nvmf_shutdown_tc3 00:22:50.383 ************************************ 00:22:50.384 07:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:50.384 ************************************ 00:22:50.384 START TEST nvmf_shutdown_tc4 00:22:50.384 ************************************ 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:50.384 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:50.384 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.384 07:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:50.384 Found net devices under 0000:31:00.0: cvl_0_0 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:50.384 Found net devices under 0000:31:00.1: cvl_0_1 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:50.384 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:50.385 07:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:22:50.385 00:22:50.385 --- 10.0.0.2 ping statistics --- 00:22:50.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.385 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:22:50.385 00:22:50.385 --- 10.0.0.1 ping statistics --- 00:22:50.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.385 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3468624 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3468624 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3468624 ']' 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:50.385 07:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.646 [2024-11-20 07:37:08.645608] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:22:50.646 [2024-11-20 07:37:08.645681] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.646 [2024-11-20 07:37:08.748949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.646 [2024-11-20 07:37:08.801432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.646 [2024-11-20 07:37:08.801484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.646 [2024-11-20 07:37:08.801492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.646 [2024-11-20 07:37:08.801500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.646 [2024-11-20 07:37:08.801506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.647 [2024-11-20 07:37:08.803935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.647 [2024-11-20 07:37:08.804196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.647 [2024-11-20 07:37:08.804358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:50.647 [2024-11-20 07:37:08.804360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.589 [2024-11-20 07:37:09.493688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:51.589 07:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.589 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.589 Malloc1 
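The nvmf_create_transport call and the ten for/cat iterations above assemble rpcs.txt, which shutdown.sh@36 then replays over the RPC socket in one rpc_cmd batch. A hedged sketch of the equivalent direct calls: the bdev size/block size and serial-number format are assumptions; the NQN prefix, Malloc bdev names, transport options, and the 10.0.0.2:4420 listener all match this log.

    # Sketch only; the real test writes these as a batch into rpcs.txt.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in {1..10}; do
        $rpc bdev_malloc_create -b "Malloc$i" 64 512
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done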
00:22:51.589 [2024-11-20 07:37:09.606494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.589 Malloc2 00:22:51.589 Malloc3 00:22:51.589 Malloc4 00:22:51.589 Malloc5 00:22:51.589 Malloc6 00:22:51.850 Malloc7 00:22:51.850 Malloc8 00:22:51.850 Malloc9 00:22:51.850 Malloc10 00:22:51.850 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.850 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:51.850 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:51.850 07:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.850 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3468970 00:22:51.850 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:51.850 07:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:52.111 [2024-11-20 07:37:10.099826] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:57.399 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.399 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3468624 00:22:57.399 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3468624 ']' 00:22:57.399 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3468624 00:22:57.399 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:22:57.399 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:57.399 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3468624 00:22:57.399 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:57.399 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:57.399 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3468624' 00:22:57.399 killing process with pid 3468624 00:22:57.399 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3468624 00:22:57.399 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3468624 00:22:57.399 [2024-11-20 07:37:15.096457] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb58a90 is same with the state(6) to be set [... four further identical recv-state errors for tqpair=0xb58a90 omitted ...]
00:22:57.399 [2024-11-20 07:37:15.096778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb58f60 is same with the state(6) to be set [... two further identical errors for tqpair=0xb58f60 omitted ...]
00:22:57.400 [2024-11-20 07:37:15.097424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb585c0 is same with the state(6) to be set [... six further identical errors for tqpair=0xb585c0 omitted ...]
00:22:57.400 [... long run of interleaved "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:22:57.400 [2024-11-20 07:37:15.101872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.400 [... repeated write-failure lines omitted ...]
00:22:57.400 [2024-11-20 07:37:15.102846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.401 [... repeated write-failure lines omitted ...]
00:22:57.401 [2024-11-20 07:37:15.103760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.401 [... repeated write-failure lines omitted ...]
00:22:57.401 [2024-11-20 07:37:15.104569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb2f60 is same with the state(6) to be set [... six further identical errors for tqpair=0xdb2f60, interleaved with write-failure lines, omitted ...]
00:22:57.401 [2024-11-20 07:37:15.104965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb25a0 is same with the state(6) to be set [... five further identical errors for tqpair=0xdb25a0 omitted ...]
00:22:57.401 [2024-11-20 07:37:15.105148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.401 NVMe io qpair process completion error
00:22:57.401 [... repeated write-failure lines omitted ...]
00:22:57.402 [2024-11-20 07:37:15.106361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.402 [2024-11-20 07:37:15.106537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4300 is same with the state(6) to be set [... three further identical errors for tqpair=0xdb4300 omitted ...]
00:22:57.402 [2024-11-20 07:37:15.106786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb47d0 is same with the state(6) to be set [... three further identical errors for tqpair=0xdb47d0 omitted ...]
00:22:57.402 [... repeated write-failure lines omitted ...]
00:22:57.402 [2024-11-20 07:37:15.107302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.402 [... repeated write-failure lines omitted ...]
00:22:57.402 [2024-11-20 07:37:15.108221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.402 [... repeated write-failure lines omitted ...]
00:22:57.403 [2024-11-20 07:37:15.109633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.403 NVMe io qpair process completion error
00:22:57.403 [... repeated write-failure lines omitted ...]
00:22:57.403 [2024-11-20 07:37:15.110984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.403 [... repeated write-failure lines omitted ...]
00:22:57.403 [2024-11-20 07:37:15.111830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.404 [... repeated write-failure lines omitted ...]
00:22:57.404 [2024-11-20 07:37:15.112775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.404 [... repeated write-failure lines omitted ...]
00:22:57.405 [2024-11-20 07:37:15.114417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.405 NVMe io qpair process completion error
00:22:57.405 [... repeated write-failure lines omitted ...]
00:22:57.405 [2024-11-20 07:37:15.115755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.405 [... repeated write-failure lines omitted ...]
00:22:57.405 [2024-11-20 07:37:15.116585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.405 [... repeated write-failure lines for nqn.2016-06.io.spdk:cnode8 omitted ...]
Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.405 starting I/O failed: -6 00:22:57.405 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 [2024-11-20 07:37:15.117498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 
00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 
00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 [2024-11-20 07:37:15.119984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:57.406 NVMe io qpair process completion error 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with 
error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 [2024-11-20 07:37:15.121231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 Write completed with error (sct=0, sc=8) 00:22:57.406 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 
Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 [2024-11-20 07:37:15.122092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting 
I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 [2024-11-20 07:37:15.123030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write 
completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.407 Write completed with error (sct=0, sc=8) 00:22:57.407 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 [2024-11-20 07:37:15.124893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 
00:22:57.408 NVMe io qpair process completion error 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 [2024-11-20 07:37:15.126052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error 
(sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 [2024-11-20 07:37:15.126892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 
00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 Write completed with error (sct=0, sc=8) 00:22:57.408 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 [2024-11-20 07:37:15.127846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.409 starting I/O failed: -6 00:22:57.409 starting I/O failed: -6 00:22:57.409 starting I/O failed: -6 00:22:57.409 starting I/O failed: -6 00:22:57.409 starting I/O failed: -6 00:22:57.409 starting I/O failed: -6 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 
00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 
00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 [2024-11-20 07:37:15.130565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:57.409 NVMe io qpair process completion error 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 [2024-11-20 07:37:15.131795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ 
transport error -6 (No such device or address) on qpair id 4 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.409 starting I/O failed: -6 00:22:57.409 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 [2024-11-20 07:37:15.132610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 
00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 00:22:57.410 starting I/O failed: -6 00:22:57.410 Write completed with error (sct=0, sc=8) 
00:22:57.410 Write completed with error (sct=0, sc=8)
00:22:57.410 starting I/O failed: -6
00:22:57.410 [... the two messages above repeat for every outstanding write on each affected qpair ...]
00:22:57.410 [2024-11-20 07:37:15.133559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.411 [2024-11-20 07:37:15.135008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.411 NVMe io qpair process completion error
00:22:57.411 [2024-11-20 07:37:15.136355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.411 [2024-11-20 07:37:15.137159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.412 [2024-11-20 07:37:15.138102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.412 [2024-11-20 07:37:15.139710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.412 NVMe io qpair process completion error
00:22:57.412 [2024-11-20 07:37:15.140849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
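The -6 status threaded through these messages is a negative Linux errno: 6 is ENXIO, which matches the "No such device or address" text the driver prints when the TCP connection to the target drops mid-I/O. As a quick sanity check of that mapping (an aside, not part of the captured output):

```bash
# Decode errno 6 (the absolute value of the -6 status above).
# On Linux this prints: ENXIO No such device or address
python3 -c 'import errno, os; print(errno.errorcode[6], os.strerror(6))'
```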
00:22:57.413 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 repetitions condensed ...]
00:22:57.413 [2024-11-20 07:37:15.141771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.413 [2024-11-20 07:37:15.142687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.414 [2024-11-20 07:37:15.145168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.414 NVMe io qpair process completion error
00:22:57.414 [2024-11-20 07:37:15.146319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.414 [2024-11-20 07:37:15.147217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.415 [2024-11-20 07:37:15.148149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.415 [2024-11-20 07:37:15.150036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.415 NVMe io qpair process completion error
00:22:57.415 Initializing NVMe Controllers
00:22:57.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:57.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:57.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:57.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:57.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:57.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:57.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:57.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:57.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:57.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:57.416 Controller IO queue size 128, less than required.
00:22:57.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:57.416 [... the two warnings above were emitted once for each attached controller ...]
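The queue-size warning is printed because the perf tool asks for a deeper queue than the fabrics controller advertises, so excess requests queue inside the driver. This traffic comes from spdk_nvme_perf (whose path appears below when it exits with errors). A minimal standalone invocation of the same tool against one of these subsystems might look like the sketch here; the flag values are illustrative, not the exact ones shutdown.sh uses:

```bash
# Sketch: drive timed random writes against one NVMe-oF/TCP subsystem.
# -q (queue depth) and -o (I/O size in bytes) are the knobs the
# "Consider using lower queue depth or smaller IO size" warning refers to.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -q 64 -o 4096 -w randwrite -t 10 \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```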
00:22:57.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:57.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:57.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:57.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:57.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:57.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:57.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:57.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:57.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:57.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:57.416 Initialization complete. Launching workers.
00:22:57.416 ========================================================
00:22:57.416                                                                                                               Latency(us)
00:22:57.416 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:22:57.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1963.65      84.38   65204.00     675.58  121867.51
00:22:57.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1891.17      81.26   67738.66     656.66  150231.88
00:22:57.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1905.96      81.90   67234.26     823.77  126232.97
00:22:57.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1921.81      82.58   66699.65     927.03  119048.31
00:22:57.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1867.29      80.24   68681.94     930.82  130348.41
00:22:57.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1861.59      79.99   68204.29     884.24  116667.80
00:22:57.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1914.83      82.28   66327.21     705.95  119690.90
00:22:57.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1872.79      80.47   67838.12     846.35  118056.19
00:22:57.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1897.09      81.52   66993.75     851.11  118741.26
00:22:57.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1906.59      81.92   66693.69     812.11  119595.78
00:22:57.416 ========================================================
00:22:57.416 Total                                                                    :   19002.76     816.52   67147.74     656.66  150231.88
00:22:57.416 
00:22:57.416 [2024-11-20 07:37:15.152773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x804060 is same with the state(6) to be set
00:22:57.416 [2024-11-20 07:37:15.152821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x806540 is same with the state(6) to be set
00:22:57.416 [2024-11-20 07:37:15.152850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x805050 is same with the state(6) to be set
00:22:57.416 [2024-11-20 07:37:15.152879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x805380 is same with the state(6) to be set
00:22:57.416 [2024-11-20 07:37:15.152907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8059e0 is same with the state(6) to be set
00:22:57.416 [2024-11-20 07:37:15.152934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x804390 is same with the state(6) to be set
00:22:57.416 [2024-11-20 07:37:15.152964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x806360 is same with the state(6) to be set
00:22:57.416 [2024-11-20 07:37:15.152993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8049f0 is same with the state(6) to be set
00:22:57.416 [2024-11-20 07:37:15.153021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8056b0 is same with the state(6) to be set
00:22:57.416 [2024-11-20 07:37:15.153049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8046c0 is same with the state(6) to be set
00:22:57.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:57.416 07:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3468970
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3468970
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3468970
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
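The `NOT wait 3468970` sequence traced above is the harness asserting an expected failure: `wait` returns the nonzero exit status of the perf process that just died, and the `NOT` helper inverts it so the test step passes only because the command failed. A simplified sketch of that idiom (the real autotest_common.sh helper also validates the argument via `valid_exec_arg` and can match an expected error string; both are omitted here):

```bash
# Simplified expect-failure wrapper, mirroring the trace above:
# run a command, capture its status, succeed only when it failed.
NOT() {
    local es=0
    "$@" || es=$?      # in the traced run, wait reported exit status 1
    (( es != 0 ))      # invert: NOT returns success only on failure
}
NOT wait "$app_pid"    # $app_pid is hypothetical; the trace used PID 3468970
```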
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:58.380 rmmod nvme_tcp 00:22:58.380 rmmod nvme_fabrics 00:22:58.380 rmmod nvme_keyring 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3468624 ']' 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3468624 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3468624 ']' 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3468624 00:22:58.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3468624) - No such process 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3468624 is not found' 00:22:58.380 Process with pid 3468624 is not found 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:58.380 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:58.381 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:58.381 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:58.381 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:58.381 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:58.381 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.381 07:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.381 07:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:00.927 00:23:00.927 real 0m10.340s 00:23:00.927 user 0m28.007s 00:23:00.927 sys 0m3.994s 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:00.927 ************************************ 00:23:00.927 END TEST nvmf_shutdown_tc4 00:23:00.927 ************************************ 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:00.927 00:23:00.927 real 0m44.230s 00:23:00.927 user 1m47.760s 00:23:00.927 sys 0m14.089s 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:00.927 ************************************ 00:23:00.927 END TEST nvmf_shutdown 00:23:00.927 ************************************ 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:00.927 ************************************ 00:23:00.927 START TEST nvmf_nsid 00:23:00.927 ************************************ 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:00.927 * Looking for test storage... 
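The nvmftestfini teardown traced above (sync, module unload, process kill, iptables restore, namespace removal, address flush) reduces to a handful of commands. A minimal condensed sketch, using the module, namespace, and interface names from this run; the grouping of the modprobe call and the use of ip netns delete are simplifications of what the harness scripts do step by step:

    # Unload the NVMe/TCP host modules loaded for the test
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring
    # Restore iptables, dropping only rules the harness tagged with SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Remove the target-side network namespace and clear the initiator interface
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1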
00:23:00.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.927 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:00.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.928 --rc genhtml_branch_coverage=1 00:23:00.928 --rc genhtml_function_coverage=1 00:23:00.928 --rc genhtml_legend=1 00:23:00.928 --rc geninfo_all_blocks=1 00:23:00.928 --rc geninfo_unexecuted_blocks=1 00:23:00.928 00:23:00.928 ' 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:00.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.928 --rc genhtml_branch_coverage=1 00:23:00.928 --rc genhtml_function_coverage=1 00:23:00.928 --rc genhtml_legend=1 00:23:00.928 --rc geninfo_all_blocks=1 00:23:00.928 --rc geninfo_unexecuted_blocks=1 00:23:00.928 00:23:00.928 ' 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:00.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.928 --rc genhtml_branch_coverage=1 00:23:00.928 --rc genhtml_function_coverage=1 00:23:00.928 --rc genhtml_legend=1 00:23:00.928 --rc geninfo_all_blocks=1 00:23:00.928 --rc geninfo_unexecuted_blocks=1 00:23:00.928 00:23:00.928 ' 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:00.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.928 --rc genhtml_branch_coverage=1 00:23:00.928 --rc genhtml_function_coverage=1 00:23:00.928 --rc genhtml_legend=1 00:23:00.928 --rc geninfo_all_blocks=1 00:23:00.928 --rc geninfo_unexecuted_blocks=1 00:23:00.928 00:23:00.928 ' 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:00.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.928 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:00.929 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:00.929 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:00.929 07:37:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:09.072 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.072 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:09.073 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
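The two E810 functions enumerated above (0000:31:00.0 and 0000:31:00.1, device ID 0x159b) are mapped to their kernel net devices in the steps that follow via the sysfs glob pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*). Condensed into a standalone sketch with this run's bus addresses:

    # Each PCI function exposes its netdev name under /sys/bus/pci/devices/<BDF>/net/
    for pci in 0000:31:00.0 0000:31:00.1; do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${net##*/}"   # prints cvl_0_0 / cvl_0_1 on this host
        done
    done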
00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:09.073 Found net devices under 0000:31:00.0: cvl_0_0 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:09.073 Found net devices under 0000:31:00.1: cvl_0_1 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.073 07:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:23:09.073 00:23:09.073 --- 10.0.0.2 ping statistics --- 00:23:09.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.073 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:23:09.073 00:23:09.073 --- 10.0.0.1 ping statistics --- 00:23:09.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.073 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3474403 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3474403 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3474403 ']' 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:09.073 07:37:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.073 [2024-11-20 07:37:26.573573] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:23:09.073 [2024-11-20 07:37:26.573637] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.073 [2024-11-20 07:37:26.672773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.073 [2024-11-20 07:37:26.723577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.073 [2024-11-20 07:37:26.723626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.073 [2024-11-20 07:37:26.723634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.073 [2024-11-20 07:37:26.723642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.073 [2024-11-20 07:37:26.723648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.073 [2024-11-20 07:37:26.724484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3474457 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=04161e22-3117-4dc1-8742-59d2a4877aac 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=f1501019-fe6f-49ad-9124-d242a8bcde51 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=16187090-5f8c-47eb-b057-6e52ae166998 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.335 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.335 null0 00:23:09.335 null1 00:23:09.335 null2 00:23:09.335 [2024-11-20 07:37:27.490340] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:23:09.335 [2024-11-20 07:37:27.490406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474457 ] 00:23:09.335 [2024-11-20 07:37:27.492163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.335 [2024-11-20 07:37:27.516463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.596 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.596 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3474457 /var/tmp/tgt2.sock 00:23:09.596 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3474457 ']' 00:23:09.596 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:09.596 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:09.596 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:09.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
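Once the second target is listening on 10.0.0.1:4421, the host side below connects over TCP and checks that each namespace's NGUID equals the UUID assigned at creation, with dashes stripped and compared upper-cased. A condensed sketch of that check for the first namespace, reusing this run's NQN, address, and host identity variables ($NVME_HOSTNQN, $NVME_HOSTID, $ns1uuid):

    # Connect to the second target over TCP (address/NQN from this run)
    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # The test expects the NGUID to be the creation UUID without dashes
    expected=$(tr -d '-' <<< "$ns1uuid" | tr '[:lower:]' '[:upper:]')
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [[ $actual == "$expected" ]] && echo "nvme0n1 NGUID matches $expected"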
00:23:09.596 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:09.596 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.596 [2024-11-20 07:37:27.583435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.596 [2024-11-20 07:37:27.636179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.858 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:09.858 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:09.858 07:37:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:10.119 [2024-11-20 07:37:28.201202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.119 [2024-11-20 07:37:28.217396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:10.119 nvme0n1 nvme0n2 00:23:10.119 nvme1n1 00:23:10.119 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:10.119 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:10.119 07:37:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:11.503 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:11.503 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:11.503 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:11.503 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:11.503 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:23:11.503 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:11.503 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:11.503 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:11.503 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:11.503 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:11.763 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:23:11.763 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:23:11.763 07:37:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:23:12.706 07:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 04161e22-3117-4dc1-8742-59d2a4877aac 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=04161e2231174dc1874259d2a4877aac 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 04161E2231174DC1874259D2A4877AAC 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 04161E2231174DC1874259D2A4877AAC == \0\4\1\6\1\E\2\2\3\1\1\7\4\D\C\1\8\7\4\2\5\9\D\2\A\4\8\7\7\A\A\C ]] 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid f1501019-fe6f-49ad-9124-d242a8bcde51 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f1501019fe6f49ad9124d242a8bcde51 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F1501019FE6F49AD9124D242A8BCDE51 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ F1501019FE6F49AD9124D242A8BCDE51 == \F\1\5\0\1\0\1\9\F\E\6\F\4\9\A\D\9\1\2\4\D\2\4\2\A\8\B\C\D\E\5\1 ]] 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:12.706 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:12.707 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:23:12.707 07:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:12.707 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:23:12.707 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:12.967 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 16187090-5f8c-47eb-b057-6e52ae166998 00:23:12.967 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:12.967 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:12.967 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:12.967 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:12.967 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:12.967 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=161870905f8c47ebb0576e52ae166998 00:23:12.967 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 161870905F8C47EBB0576E52AE166998 00:23:12.967 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 161870905F8C47EBB0576E52AE166998 == \1\6\1\8\7\0\9\0\5\F\8\C\4\7\E\B\B\0\5\7\6\E\5\2\A\E\1\6\6\9\9\8 ]] 00:23:12.967 07:37:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:12.967 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:12.967 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:12.967 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3474457 00:23:12.967 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3474457 ']' 00:23:12.967 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3474457 00:23:12.967 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:12.967 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:13.229 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3474457 00:23:13.229 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:13.229 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:13.229 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3474457' 00:23:13.229 killing process with pid 3474457 00:23:13.229 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3474457 00:23:13.229 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3474457 00:23:13.229 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:13.229 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:13.229 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:13.229 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:13.229 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:13.229 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:13.229 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:13.490 rmmod nvme_tcp 00:23:13.490 rmmod nvme_fabrics 00:23:13.490 rmmod nvme_keyring 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3474403 ']' 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3474403 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3474403 ']' 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3474403 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3474403 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3474403' 00:23:13.490 killing process with pid 3474403 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3474403 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3474403 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.490 07:37:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.037 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:16.037 00:23:16.037 real 0m15.099s 00:23:16.037 user 
0m11.435s 00:23:16.037 sys 0m7.005s 00:23:16.037 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:16.037 07:37:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:16.037 ************************************ 00:23:16.037 END TEST nvmf_nsid 00:23:16.037 ************************************ 00:23:16.037 07:37:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:16.037 00:23:16.037 real 13m7.468s 00:23:16.037 user 27m17.636s 00:23:16.037 sys 3m56.769s 00:23:16.037 07:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:16.037 07:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:16.037 ************************************ 00:23:16.037 END TEST nvmf_target_extra 00:23:16.037 ************************************ 00:23:16.037 07:37:33 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:16.037 07:37:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:16.037 07:37:33 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:16.037 07:37:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.037 ************************************ 00:23:16.037 START TEST nvmf_host 00:23:16.037 ************************************ 00:23:16.037 07:37:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:16.037 * Looking for test storage... 00:23:16.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:16.037 07:37:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:16.038 07:37:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:16.038 07:37:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:16.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.038 --rc genhtml_branch_coverage=1 00:23:16.038 --rc genhtml_function_coverage=1 00:23:16.038 --rc genhtml_legend=1 00:23:16.038 --rc geninfo_all_blocks=1 00:23:16.038 --rc geninfo_unexecuted_blocks=1 00:23:16.038 00:23:16.038 ' 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:16.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.038 --rc genhtml_branch_coverage=1 00:23:16.038 --rc genhtml_function_coverage=1 00:23:16.038 --rc genhtml_legend=1 00:23:16.038 --rc geninfo_all_blocks=1 00:23:16.038 --rc geninfo_unexecuted_blocks=1 00:23:16.038 00:23:16.038 ' 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:16.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.038 --rc genhtml_branch_coverage=1 00:23:16.038 --rc genhtml_function_coverage=1 00:23:16.038 --rc genhtml_legend=1 00:23:16.038 --rc geninfo_all_blocks=1 00:23:16.038 --rc geninfo_unexecuted_blocks=1 00:23:16.038 00:23:16.038 ' 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:16.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.038 --rc genhtml_branch_coverage=1 00:23:16.038 --rc genhtml_function_coverage=1 00:23:16.038 --rc genhtml_legend=1 00:23:16.038 --rc geninfo_all_blocks=1 00:23:16.038 --rc geninfo_unexecuted_blocks=1 00:23:16.038 00:23:16.038 ' 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
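(Annotation: the lt 1.15 2 / cmp_versions trace just above is the harness checking whether the installed lcov predates version 2 before choosing which coverage option names to export. A condensed, standalone bash sketch of the same field-by-field comparison follows; the function name and layout here are illustrative, not the actual scripts/common.sh code.)

    #!/usr/bin/env bash
    # Compare two dotted versions field by field; return 0 if $1 < $2.
    version_lt() {
        local IFS=.-:          # split on the same separators the trace shows
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local v
        for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
            ((${a[v]:-0} < ${b[v]:-0})) && return 0   # first lower field decides
            ((${a[v]:-0} > ${b[v]:-0})) && return 1
        done
        return 1               # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov older than 2: use legacy --rc lcov_* option names"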
00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:16.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.038 ************************************ 00:23:16.038 START TEST nvmf_multicontroller 00:23:16.038 ************************************ 00:23:16.038 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:16.301 * Looking for test storage... 
00:23:16.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:16.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.301 --rc genhtml_branch_coverage=1 00:23:16.301 --rc genhtml_function_coverage=1 00:23:16.301 --rc genhtml_legend=1 00:23:16.301 --rc geninfo_all_blocks=1 00:23:16.301 --rc geninfo_unexecuted_blocks=1 00:23:16.301 00:23:16.301 ' 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:16.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.301 --rc genhtml_branch_coverage=1 00:23:16.301 --rc genhtml_function_coverage=1 00:23:16.301 --rc genhtml_legend=1 00:23:16.301 --rc geninfo_all_blocks=1 00:23:16.301 --rc geninfo_unexecuted_blocks=1 00:23:16.301 00:23:16.301 ' 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:16.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.301 --rc genhtml_branch_coverage=1 00:23:16.301 --rc genhtml_function_coverage=1 00:23:16.301 --rc genhtml_legend=1 00:23:16.301 --rc geninfo_all_blocks=1 00:23:16.301 --rc geninfo_unexecuted_blocks=1 00:23:16.301 00:23:16.301 ' 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:16.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.301 --rc genhtml_branch_coverage=1 00:23:16.301 --rc genhtml_function_coverage=1 00:23:16.301 --rc genhtml_legend=1 00:23:16.301 --rc geninfo_all_blocks=1 00:23:16.301 --rc geninfo_unexecuted_blocks=1 00:23:16.301 00:23:16.301 ' 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:16.301 07:37:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.301 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:16.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:16.302 07:37:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:16.302 07:37:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:24.447 
07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:24.447 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:24.447 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.447 07:37:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:24.447 Found net devices under 0000:31:00.0: cvl_0_0 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:24.447 Found net devices under 0000:31:00.1: cvl_0_1 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
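(Annotation: the '[: : integer expression expected' complaints printed earlier from nvmf/common.sh line 33 are a shell quirk rather than a test failure: the traced guard runs '[' '' -eq 1 ']' when the variable it tests is empty, and test(1) cannot compare an empty string numerically, so the check returns status 2 and the harness simply falls through. A minimal reproduction, with MY_FLAG standing in for whichever variable line 33 actually reads:)

    $ MY_FLAG=""
    $ [ "$MY_FLAG" -eq 1 ] && echo enabled
    bash: [: : integer expression expected    # status 2, so 'enabled' never prints
    $ [ "${MY_FLAG:-0}" -eq 1 ] || echo "flag unset, treated as 0"
    flag unset, treated as 0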
00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:24.447 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:24.448 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:24.448 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:24.448 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:24.448 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:24.448 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:24.448 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:24.448 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:24.448 07:37:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:24.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:24.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms
00:23:24.448
00:23:24.448 --- 10.0.0.2 ping statistics ---
00:23:24.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:24.448 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:24.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:24.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms
00:23:24.448
00:23:24.448 --- 10.0.0.1 ping statistics ---
00:23:24.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:24.448 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3479674
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3479674
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3479674 ']'
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:24.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:24.448 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:24.448 [2024-11-20 07:37:42.140590] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
00:23:24.448 [2024-11-20 07:37:42.140657] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.448 [2024-11-20 07:37:42.240549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:24.448 [2024-11-20 07:37:42.293673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.448 [2024-11-20 07:37:42.293724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.448 [2024-11-20 07:37:42.293732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.448 [2024-11-20 07:37:42.293739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.448 [2024-11-20 07:37:42.293754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.448 [2024-11-20 07:37:42.295648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.448 [2024-11-20 07:37:42.295811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.448 [2024-11-20 07:37:42.295812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.023 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:25.023 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:25.023 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.023 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:25.023 07:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.023 [2024-11-20 07:37:43.022221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.023 Malloc0 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.023 [2024-11-20 07:37:43.094974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.023 [2024-11-20 07:37:43.106876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.023 Malloc1 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3479919 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3479919 /var/tmp/bdevperf.sock 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3479919 ']' 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
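(Annotation: bdevperf is launched above with -z -r /var/tmp/bdevperf.sock, which, as I read it, makes it idle on its own JSON-RPC socket instead of immediately running the -q 128 -o 4096 -w write -t 1 workload, so the test can attach controllers first. In this harness, rpc_cmd -s <sock> appears to forward to SPDK's scripts/rpc.py against that socket; issued by hand, the first attach traced below would look roughly like the following, where the rpc.py equivalence is my assumption:)

    # hypothetical manual equivalent of the traced rpc_cmd call
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1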
00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:25.023 07:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.967 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:25.967 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.968 NVMe0n1 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.968 1 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.968 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.230 request: 00:23:26.230 { 00:23:26.230 "name": "NVMe0", 00:23:26.230 "trtype": "tcp", 00:23:26.230 "traddr": "10.0.0.2", 00:23:26.230 "adrfam": "ipv4", 00:23:26.230 "trsvcid": "4420", 00:23:26.230 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:26.230 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:26.230 "hostaddr": "10.0.0.1", 00:23:26.230 "prchk_reftag": false, 00:23:26.230 "prchk_guard": false, 00:23:26.230 "hdgst": false, 00:23:26.230 "ddgst": false, 00:23:26.230 "allow_unrecognized_csi": false, 00:23:26.230 "method": "bdev_nvme_attach_controller", 00:23:26.230 "req_id": 1 00:23:26.230 } 00:23:26.230 Got JSON-RPC error response 00:23:26.230 response: 00:23:26.230 { 00:23:26.230 "code": -114, 00:23:26.230 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.230 } 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.230 request: 00:23:26.230 { 00:23:26.230 "name": "NVMe0", 00:23:26.230 "trtype": "tcp", 00:23:26.230 "traddr": "10.0.0.2", 00:23:26.230 "adrfam": "ipv4", 00:23:26.230 "trsvcid": "4420", 00:23:26.230 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.230 "hostaddr": "10.0.0.1", 00:23:26.230 "prchk_reftag": false, 00:23:26.230 "prchk_guard": false, 00:23:26.230 "hdgst": false, 00:23:26.230 "ddgst": false, 00:23:26.230 "allow_unrecognized_csi": false, 00:23:26.230 "method": "bdev_nvme_attach_controller", 00:23:26.230 "req_id": 1 00:23:26.230 } 00:23:26.230 Got JSON-RPC error response 00:23:26.230 response: 00:23:26.230 { 00:23:26.230 "code": -114, 00:23:26.230 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.230 } 00:23:26.230 07:37:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.230 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.231 request: 00:23:26.231 { 00:23:26.231 "name": "NVMe0", 00:23:26.231 "trtype": "tcp", 00:23:26.231 "traddr": "10.0.0.2", 00:23:26.231 "adrfam": "ipv4", 00:23:26.231 "trsvcid": "4420", 00:23:26.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.231 "hostaddr": "10.0.0.1", 00:23:26.231 "prchk_reftag": false, 00:23:26.231 "prchk_guard": false, 00:23:26.231 "hdgst": false, 00:23:26.231 "ddgst": false, 00:23:26.231 "multipath": "disable", 00:23:26.231 "allow_unrecognized_csi": false, 00:23:26.231 "method": "bdev_nvme_attach_controller", 00:23:26.231 "req_id": 1 00:23:26.231 } 00:23:26.231 Got JSON-RPC error response 00:23:26.231 response: 00:23:26.231 { 00:23:26.231 "code": -114, 00:23:26.231 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:26.231 } 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.231 07:37:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.231 request: 00:23:26.231 { 00:23:26.231 "name": "NVMe0", 00:23:26.231 "trtype": "tcp", 00:23:26.231 "traddr": "10.0.0.2", 00:23:26.231 "adrfam": "ipv4", 00:23:26.231 "trsvcid": "4420", 00:23:26.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.231 "hostaddr": "10.0.0.1", 00:23:26.231 "prchk_reftag": false, 00:23:26.231 "prchk_guard": false, 00:23:26.231 "hdgst": false, 00:23:26.231 "ddgst": false, 00:23:26.231 "multipath": "failover", 00:23:26.231 "allow_unrecognized_csi": false, 00:23:26.231 "method": "bdev_nvme_attach_controller", 00:23:26.231 "req_id": 1 00:23:26.231 } 00:23:26.231 Got JSON-RPC error response 00:23:26.231 response: 00:23:26.231 { 00:23:26.231 "code": -114, 00:23:26.231 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.231 } 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.231 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.492 NVMe0n1 00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
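A note on what the four attach attempts above establish: bdev_nvme_attach_controller refuses to reuse the controller name NVMe0 when the request points at a different subsystem (cnode2), when multipath is disabled (-x disable), or when failover mode names the portal already in use, and each refusal surfaces as JSON-RPC error -114. Only the final attach, which adds a genuinely new portal (10.0.0.2:4421) for the same subsystem, is accepted and reports the bdev NVMe0n1. As a minimal sketch, that successful step with SPDK's stock RPC client looks like this (socket path, address, and NQN taken from the trace; it assumes NVMe0 was first attached to 10.0.0.2:4420 with multipath enabled):

    # add a second path to the existing controller NVMe0:
    # same subsystem NQN, new portal 10.0.0.2:4421
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1

Re-running any of the rejected variants (same portal, -x disable, or a different subsystem under the same name) reproduces the "A controller named NVMe0 already exists" responses logged above.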
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:26.492
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:23:26.492 07:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:27.881 {
00:23:27.881 "results": [
00:23:27.881 {
00:23:27.881 "job": "NVMe0n1",
00:23:27.881 "core_mask": "0x1",
00:23:27.881 "workload": "write",
00:23:27.881 "status": "finished",
00:23:27.881 "queue_depth": 128,
00:23:27.881 "io_size": 4096,
00:23:27.881 "runtime": 1.005267,
00:23:27.881 "iops": 23313.209326477445,
00:23:27.881 "mibps": 91.06722393155252,
00:23:27.881 "io_failed": 0,
00:23:27.881 "io_timeout": 0,
00:23:27.881 "avg_latency_us": 5478.469909540877,
00:23:27.881 "min_latency_us": 2102.6133333333332,
00:23:27.881 "max_latency_us": 15400.96
00:23:27.881 }
00:23:27.881 ],
00:23:27.881 "core_count": 1
00:23:27.881 }
00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]]
00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3479919
00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller --
common/autotest_common.sh@952 -- # '[' -z 3479919 ']' 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3479919 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3479919 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3479919' 00:23:27.881 killing process with pid 3479919 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3479919 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3479919 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:27.881 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:27.882 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:27.882 [2024-11-20 07:37:43.236862] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:23:27.882 [2024-11-20 07:37:43.236939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3479919 ]
00:23:27.882 [2024-11-20 07:37:43.332779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:27.882 [2024-11-20 07:37:43.385176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:27.882 [2024-11-20 07:37:44.520964] bdev.c:4753:bdev_name_add: *ERROR*: Bdev name 8e3e0873-218a-4c62-8a97-492160b544b6 already exists
00:23:27.882 [2024-11-20 07:37:44.521010] bdev.c:7962:bdev_register: *ERROR*: Unable to add uuid:8e3e0873-218a-4c62-8a97-492160b544b6 alias for bdev NVMe1n1
00:23:27.882 [2024-11-20 07:37:44.521020] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:23:27.882 Running I/O for 1 seconds...
00:23:27.882 23261.00 IOPS, 90.86 MiB/s
00:23:27.882 Latency(us)
00:23:27.882 [2024-11-20T06:37:46.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:27.882 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:23:27.882 NVMe0n1 : 1.01 23313.21 91.07 0.00 0.00 5478.47 2102.61 15400.96
00:23:27.882 [2024-11-20T06:37:46.092Z] ===================================================================================================================
00:23:27.882 [2024-11-20T06:37:46.092Z] Total : 23313.21 91.07 0.00 0.00 5478.47 2102.61 15400.96
00:23:27.882 Received shutdown signal, test time was about 1.000000 seconds
00:23:27.882
00:23:27.882 Latency(us)
00:23:27.882 [2024-11-20T06:37:46.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:27.882 [2024-11-20T06:37:46.092Z] ===================================================================================================================
00:23:27.882 [2024-11-20T06:37:46.092Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:27.882 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file
00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:27.882 rmmod nvme_tcp
00:23:27.882 rmmod nvme_fabrics
00:23:27.882 rmmod nvme_keyring
00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
00:23:27.882
07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3479674 ']' 00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3479674 00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3479674 ']' 00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3479674 00:23:27.882 07:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:27.882 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:27.882 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3479674 00:23:27.882 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:27.882 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:27.882 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3479674' 00:23:27.882 killing process with pid 3479674 00:23:27.882 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3479674 00:23:27.882 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3479674 00:23:28.144 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.144 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.144 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.144 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:28.144 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:28.144 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.144 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.144 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.144 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:28.144 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.144 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.144 07:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:30.691 00:23:30.691 real 0m14.118s 00:23:30.691 user 0m16.913s 00:23:30.691 sys 0m6.656s 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.691 ************************************ 00:23:30.691 END TEST nvmf_multicontroller 00:23:30.691 ************************************ 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.691 ************************************ 00:23:30.691 START TEST nvmf_aer 00:23:30.691 ************************************ 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:30.691 * Looking for test storage... 00:23:30.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:30.691 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:30.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.692 --rc genhtml_branch_coverage=1 00:23:30.692 --rc genhtml_function_coverage=1 00:23:30.692 --rc genhtml_legend=1 00:23:30.692 --rc geninfo_all_blocks=1 00:23:30.692 --rc geninfo_unexecuted_blocks=1 00:23:30.692 00:23:30.692 ' 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:30.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.692 --rc genhtml_branch_coverage=1 00:23:30.692 --rc genhtml_function_coverage=1 00:23:30.692 --rc genhtml_legend=1 00:23:30.692 --rc geninfo_all_blocks=1 00:23:30.692 --rc geninfo_unexecuted_blocks=1 00:23:30.692 00:23:30.692 ' 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:30.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.692 --rc genhtml_branch_coverage=1 00:23:30.692 --rc genhtml_function_coverage=1 00:23:30.692 --rc genhtml_legend=1 00:23:30.692 --rc geninfo_all_blocks=1 00:23:30.692 --rc geninfo_unexecuted_blocks=1 00:23:30.692 00:23:30.692 ' 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:30.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.692 --rc genhtml_branch_coverage=1 00:23:30.692 --rc genhtml_function_coverage=1 00:23:30.692 --rc genhtml_legend=1 00:23:30.692 --rc geninfo_all_blocks=1 00:23:30.692 --rc geninfo_unexecuted_blocks=1 00:23:30.692 00:23:30.692 ' 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:30.692 07:37:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:38.927 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:38.928 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:38.928 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:38.928 Found net devices under 0000:31:00.0: cvl_0_0 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.928 07:37:55 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:38.928 Found net devices under 0000:31:00.1: cvl_0_1 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.928 07:37:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:38.928 
07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:38.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:23:38.928 00:23:38.928 --- 10.0.0.2 ping statistics --- 00:23:38.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.928 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:38.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:23:38.928 00:23:38.928 --- 10.0.0.1 ping statistics --- 00:23:38.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.928 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3484643 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3484643 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 3484643 ']' 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:38.928 07:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:38.928 [2024-11-20 07:37:56.375452] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
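Before the target's startup banner continues below, it is worth condensing what the interface setup and the two ping checks above validated: the test splits the two E810 ports (cvl_0_0 and cvl_0_1) across network namespaces so that initiator and target exercise a real NIC-to-NIC TCP path on a single machine. Stripped of the xtrace decoration, the plumbing reduces to this sketch (interface names, addresses, and the iptables rule exactly as in the trace):

    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns (0.620 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns (0.266 ms above)

nvmf_tgt itself is then launched inside that namespace via ip netns exec cvl_0_0_ns_spdk, per the nvmfpid=3484643 line above, and its initialization output follows.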
00:23:38.928 [2024-11-20 07:37:56.375524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.928 [2024-11-20 07:37:56.478925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:38.928 [2024-11-20 07:37:56.533106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.928 [2024-11-20 07:37:56.533159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.928 [2024-11-20 07:37:56.533168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.928 [2024-11-20 07:37:56.533175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.928 [2024-11-20 07:37:56.533182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.928 [2024-11-20 07:37:56.535588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.928 [2024-11-20 07:37:56.535737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.928 [2024-11-20 07:37:56.535899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.928 [2024-11-20 07:37:56.536033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.190 [2024-11-20 07:37:57.242317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.190 Malloc0 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.190 [2024-11-20 07:37:57.317528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.190 [ 00:23:39.190 { 00:23:39.190 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:39.190 "subtype": "Discovery", 00:23:39.190 "listen_addresses": [], 00:23:39.190 "allow_any_host": true, 00:23:39.190 "hosts": [] 00:23:39.190 }, 00:23:39.190 { 00:23:39.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.190 "subtype": "NVMe", 00:23:39.190 "listen_addresses": [ 00:23:39.190 { 00:23:39.190 "trtype": "TCP", 00:23:39.190 "adrfam": "IPv4", 00:23:39.190 "traddr": "10.0.0.2", 00:23:39.190 "trsvcid": "4420" 00:23:39.190 } 00:23:39.190 ], 00:23:39.190 "allow_any_host": true, 00:23:39.190 "hosts": [], 00:23:39.190 "serial_number": "SPDK00000000000001", 00:23:39.190 "model_number": "SPDK bdev Controller", 00:23:39.190 "max_namespaces": 2, 00:23:39.190 "min_cntlid": 1, 00:23:39.190 "max_cntlid": 65519, 00:23:39.190 "namespaces": [ 00:23:39.190 { 00:23:39.190 "nsid": 1, 00:23:39.190 "bdev_name": "Malloc0", 00:23:39.190 "name": "Malloc0", 00:23:39.190 "nguid": "D512505CEF174ECA92D945C9DC928EDA", 00:23:39.190 "uuid": "d512505c-ef17-4eca-92d9-45c9dc928eda" 00:23:39.190 } 00:23:39.190 ] 00:23:39.190 } 00:23:39.190 ] 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3484994 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:23:39.190 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.451 Malloc1 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.451 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.452 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:39.452 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.452 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.452 Asynchronous Event Request test 00:23:39.452 Attaching to 10.0.0.2 00:23:39.452 Attached to 10.0.0.2 00:23:39.452 Registering asynchronous event callbacks... 00:23:39.452 Starting namespace attribute notice tests for all controllers... 00:23:39.452 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:39.452 aer_cb - Changed Namespace 00:23:39.452 Cleaning up... 
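The "aer_cb - Changed Namespace" output above is the assertion at the heart of this test: while the test/nvme/aer tool is connected to cnode1 and armed with an Asynchronous Event Request, the script hot-adds a second namespace and the target answers with an AEN (aen_event_type 0x02 is a Notice; log page 4 is the Changed Namespace List). Reduced to the two target-side RPCs visible in the trace, the trigger is simply (names and sizes as above; rpc.py is SPDK's stock RPC client):

    # create a second backing bdev and expose it as nsid 2
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    # the connected host receives the AEN, re-reads the changed-namespace
    # log page, and prints the aer_cb lines captured above

The nvmf_get_subsystems listing that follows confirms the outcome: cnode1 now exposes both Malloc0 (nsid 1) and Malloc1 (nsid 2).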
00:23:39.452 [ 00:23:39.452 { 00:23:39.452 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:39.452 "subtype": "Discovery", 00:23:39.452 "listen_addresses": [], 00:23:39.452 "allow_any_host": true, 00:23:39.452 "hosts": [] 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.452 "subtype": "NVMe", 00:23:39.452 "listen_addresses": [ 00:23:39.452 { 00:23:39.452 "trtype": "TCP", 00:23:39.452 "adrfam": "IPv4", 00:23:39.452 "traddr": "10.0.0.2", 00:23:39.452 "trsvcid": "4420" 00:23:39.452 } 00:23:39.452 ], 00:23:39.452 "allow_any_host": true, 00:23:39.452 "hosts": [], 00:23:39.452 "serial_number": "SPDK00000000000001", 00:23:39.452 "model_number": "SPDK bdev Controller", 00:23:39.452 "max_namespaces": 2, 00:23:39.452 "min_cntlid": 1, 00:23:39.452 "max_cntlid": 65519, 00:23:39.452 "namespaces": [ 00:23:39.452 { 00:23:39.452 "nsid": 1, 00:23:39.452 "bdev_name": "Malloc0", 00:23:39.452 "name": "Malloc0", 00:23:39.452 "nguid": "D512505CEF174ECA92D945C9DC928EDA", 00:23:39.452 "uuid": "d512505c-ef17-4eca-92d9-45c9dc928eda" 00:23:39.452 }, 00:23:39.452 { 00:23:39.452 "nsid": 2, 00:23:39.452 "bdev_name": "Malloc1", 00:23:39.452 "name": "Malloc1", 00:23:39.452 "nguid": "A729500C67BC4BB58A0B22E916369A73", 00:23:39.452 "uuid": "a729500c-67bc-4bb5-8a0b-22e916369a73" 00:23:39.452 } 00:23:39.452 ] 00:23:39.452 } 00:23:39.452 ] 00:23:39.452 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.452 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3484994 00:23:39.452 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:39.452 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.452 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:39.713 rmmod 
nvme_tcp 00:23:39.713 rmmod nvme_fabrics 00:23:39.713 rmmod nvme_keyring 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3484643 ']' 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3484643 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 3484643 ']' 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 3484643 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3484643 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3484643' 00:23:39.713 killing process with pid 3484643 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 3484643 00:23:39.713 07:37:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 3484643 00:23:39.974 07:37:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:39.974 07:37:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:39.974 07:37:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:39.974 07:37:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:39.974 07:37:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:39.974 07:37:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:39.974 07:37:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:39.974 07:37:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:39.974 07:37:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:39.974 07:37:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.974 07:37:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.974 07:37:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.887 07:38:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:42.149 00:23:42.149 real 0m11.737s 00:23:42.149 user 0m8.248s 00:23:42.149 sys 0m6.276s 00:23:42.149 07:38:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:42.149 07:38:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.149 ************************************ 00:23:42.149 END TEST nvmf_aer 00:23:42.149 ************************************ 00:23:42.149 07:38:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:42.149 07:38:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:42.149 07:38:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:42.149 07:38:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.149 ************************************ 00:23:42.149 START TEST nvmf_async_init 00:23:42.149 ************************************ 00:23:42.149 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:42.149 * Looking for test storage... 00:23:42.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:42.149 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:42.149 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:42.149 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:42.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.411 --rc genhtml_branch_coverage=1 00:23:42.411 --rc genhtml_function_coverage=1 00:23:42.411 --rc genhtml_legend=1 00:23:42.411 --rc geninfo_all_blocks=1 00:23:42.411 --rc geninfo_unexecuted_blocks=1 00:23:42.411 00:23:42.411 ' 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:42.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.411 --rc genhtml_branch_coverage=1 00:23:42.411 --rc genhtml_function_coverage=1 00:23:42.411 --rc genhtml_legend=1 00:23:42.411 --rc geninfo_all_blocks=1 00:23:42.411 --rc geninfo_unexecuted_blocks=1 00:23:42.411 00:23:42.411 ' 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:42.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.411 --rc genhtml_branch_coverage=1 00:23:42.411 --rc genhtml_function_coverage=1 00:23:42.411 --rc genhtml_legend=1 00:23:42.411 --rc geninfo_all_blocks=1 00:23:42.411 --rc geninfo_unexecuted_blocks=1 00:23:42.411 00:23:42.411 ' 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:42.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.411 --rc genhtml_branch_coverage=1 00:23:42.411 --rc genhtml_function_coverage=1 00:23:42.411 --rc genhtml_legend=1 00:23:42.411 --rc geninfo_all_blocks=1 00:23:42.411 --rc geninfo_unexecuted_blocks=1 00:23:42.411 00:23:42.411 ' 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.411 07:38:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.411 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:42.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:42.412 07:38:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=079e61caaeeb4e86b8f91a7e4aef80b9 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.412 07:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.557 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:50.558 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:50.558 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:50.558 Found net devices under 0000:31:00.0: cvl_0_0 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:50.558 Found net devices under 0000:31:00.1: cvl_0_1 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.558 07:38:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:50.558 07:38:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:50.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:23:50.558 00:23:50.558 --- 10.0.0.2 ping statistics --- 00:23:50.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.558 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:50.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:23:50.558 00:23:50.558 --- 10.0.0.1 ping statistics --- 00:23:50.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.558 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3489350 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3489350 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:50.558 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 3489350 ']' 00:23:50.559 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.559 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:50.559 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.559 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:50.559 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.559 [2024-11-20 07:38:08.159516] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:23:50.559 [2024-11-20 07:38:08.159582] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.559 [2024-11-20 07:38:08.258849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.559 [2024-11-20 07:38:08.310170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.559 [2024-11-20 07:38:08.310220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.559 [2024-11-20 07:38:08.310229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.559 [2024-11-20 07:38:08.310236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.559 [2024-11-20 07:38:08.310243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.559 [2024-11-20 07:38:08.311082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.820 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:50.820 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:23:50.820 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:50.820 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:50.820 07:38:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.820 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.820 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:50.820 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.820 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.820 [2024-11-20 07:38:09.022364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.082 null0 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 079e61caaeeb4e86b8f91a7e4aef80b9 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.082 [2024-11-20 07:38:09.082773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.082 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.343 nvme0n1 00:23:51.343 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.343 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:51.343 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.343 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.343 [ 00:23:51.343 { 00:23:51.343 "name": "nvme0n1", 00:23:51.343 "aliases": [ 00:23:51.343 "079e61ca-aeeb-4e86-b8f9-1a7e4aef80b9" 00:23:51.343 ], 00:23:51.343 "product_name": "NVMe disk", 00:23:51.343 "block_size": 512, 00:23:51.343 "num_blocks": 2097152, 00:23:51.343 "uuid": "079e61ca-aeeb-4e86-b8f9-1a7e4aef80b9", 00:23:51.343 "numa_id": 0, 00:23:51.343 "assigned_rate_limits": { 00:23:51.343 "rw_ios_per_sec": 0, 00:23:51.343 "rw_mbytes_per_sec": 0, 00:23:51.343 "r_mbytes_per_sec": 0, 00:23:51.343 "w_mbytes_per_sec": 0 00:23:51.343 }, 00:23:51.343 "claimed": false, 00:23:51.343 "zoned": false, 00:23:51.343 "supported_io_types": { 00:23:51.343 "read": true, 00:23:51.343 "write": true, 00:23:51.343 "unmap": false, 00:23:51.343 "flush": true, 00:23:51.343 "reset": true, 00:23:51.343 "nvme_admin": true, 00:23:51.343 "nvme_io": true, 00:23:51.343 "nvme_io_md": false, 00:23:51.343 "write_zeroes": true, 00:23:51.344 "zcopy": false, 00:23:51.344 "get_zone_info": false, 00:23:51.344 "zone_management": false, 00:23:51.344 "zone_append": false, 00:23:51.344 "compare": true, 00:23:51.344 "compare_and_write": true, 00:23:51.344 "abort": true, 00:23:51.344 "seek_hole": false, 00:23:51.344 "seek_data": false, 00:23:51.344 "copy": true, 00:23:51.344 "nvme_iov_md": false 00:23:51.344 }, 00:23:51.344 
"memory_domains": [ 00:23:51.344 { 00:23:51.344 "dma_device_id": "system", 00:23:51.344 "dma_device_type": 1 00:23:51.344 } 00:23:51.344 ], 00:23:51.344 "driver_specific": { 00:23:51.344 "nvme": [ 00:23:51.344 { 00:23:51.344 "trid": { 00:23:51.344 "trtype": "TCP", 00:23:51.344 "adrfam": "IPv4", 00:23:51.344 "traddr": "10.0.0.2", 00:23:51.344 "trsvcid": "4420", 00:23:51.344 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.344 }, 00:23:51.344 "ctrlr_data": { 00:23:51.344 "cntlid": 1, 00:23:51.344 "vendor_id": "0x8086", 00:23:51.344 "model_number": "SPDK bdev Controller", 00:23:51.344 "serial_number": "00000000000000000000", 00:23:51.344 "firmware_revision": "25.01", 00:23:51.344 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.344 "oacs": { 00:23:51.344 "security": 0, 00:23:51.344 "format": 0, 00:23:51.344 "firmware": 0, 00:23:51.344 "ns_manage": 0 00:23:51.344 }, 00:23:51.344 "multi_ctrlr": true, 00:23:51.344 "ana_reporting": false 00:23:51.344 }, 00:23:51.344 "vs": { 00:23:51.344 "nvme_version": "1.3" 00:23:51.344 }, 00:23:51.344 "ns_data": { 00:23:51.344 "id": 1, 00:23:51.344 "can_share": true 00:23:51.344 } 00:23:51.344 } 00:23:51.344 ], 00:23:51.344 "mp_policy": "active_passive" 00:23:51.344 } 00:23:51.344 } 00:23:51.344 ] 00:23:51.344 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.344 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:51.344 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.344 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.344 [2024-11-20 07:38:09.359217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:51.344 [2024-11-20 07:38:09.359311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8734a0 (9): Bad file descriptor 00:23:51.344 [2024-11-20 07:38:09.490853] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:51.344 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.344 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:51.344 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.344 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.344 [ 00:23:51.344 { 00:23:51.344 "name": "nvme0n1", 00:23:51.344 "aliases": [ 00:23:51.344 "079e61ca-aeeb-4e86-b8f9-1a7e4aef80b9" 00:23:51.344 ], 00:23:51.344 "product_name": "NVMe disk", 00:23:51.344 "block_size": 512, 00:23:51.344 "num_blocks": 2097152, 00:23:51.344 "uuid": "079e61ca-aeeb-4e86-b8f9-1a7e4aef80b9", 00:23:51.344 "numa_id": 0, 00:23:51.344 "assigned_rate_limits": { 00:23:51.344 "rw_ios_per_sec": 0, 00:23:51.344 "rw_mbytes_per_sec": 0, 00:23:51.344 "r_mbytes_per_sec": 0, 00:23:51.344 "w_mbytes_per_sec": 0 00:23:51.344 }, 00:23:51.344 "claimed": false, 00:23:51.344 "zoned": false, 00:23:51.344 "supported_io_types": { 00:23:51.344 "read": true, 00:23:51.344 "write": true, 00:23:51.344 "unmap": false, 00:23:51.344 "flush": true, 00:23:51.344 "reset": true, 00:23:51.344 "nvme_admin": true, 00:23:51.344 "nvme_io": true, 00:23:51.344 "nvme_io_md": false, 00:23:51.344 "write_zeroes": true, 00:23:51.344 "zcopy": false, 00:23:51.344 "get_zone_info": false, 00:23:51.344 "zone_management": false, 00:23:51.344 "zone_append": false, 00:23:51.344 "compare": true, 00:23:51.344 "compare_and_write": true, 00:23:51.344 "abort": true, 00:23:51.344 "seek_hole": false, 00:23:51.344 "seek_data": false, 00:23:51.344 "copy": true, 00:23:51.344 "nvme_iov_md": false 00:23:51.344 }, 00:23:51.344 "memory_domains": [ 00:23:51.344 { 00:23:51.344 "dma_device_id": "system", 00:23:51.344 "dma_device_type": 1 00:23:51.344 } 00:23:51.344 ], 00:23:51.344 "driver_specific": { 00:23:51.344 "nvme": [ 00:23:51.344 { 00:23:51.344 "trid": { 00:23:51.344 "trtype": "TCP", 00:23:51.344 "adrfam": "IPv4", 00:23:51.344 "traddr": "10.0.0.2", 00:23:51.344 "trsvcid": "4420", 00:23:51.344 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.344 }, 00:23:51.344 "ctrlr_data": { 00:23:51.344 "cntlid": 2, 00:23:51.344 "vendor_id": "0x8086", 00:23:51.344 "model_number": "SPDK bdev Controller", 00:23:51.344 "serial_number": "00000000000000000000", 00:23:51.344 "firmware_revision": "25.01", 00:23:51.344 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.344 "oacs": { 00:23:51.344 "security": 0, 00:23:51.344 "format": 0, 00:23:51.344 "firmware": 0, 00:23:51.344 "ns_manage": 0 00:23:51.344 }, 00:23:51.344 "multi_ctrlr": true, 00:23:51.344 "ana_reporting": false 00:23:51.344 }, 00:23:51.344 "vs": { 00:23:51.344 "nvme_version": "1.3" 00:23:51.344 }, 00:23:51.344 "ns_data": { 00:23:51.344 "id": 1, 00:23:51.344 "can_share": true 00:23:51.344 } 00:23:51.344 } 00:23:51.344 ], 00:23:51.345 "mp_policy": "active_passive" 00:23:51.345 } 00:23:51.345 } 00:23:51.345 ] 00:23:51.345 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.345 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.345 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.345 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.345 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
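What follows is the TLS leg of the test (host/async_init.sh@53-66): a PSK interchange key is written to a mktemp file with mode 0600, registered with the keyring, the subsystem is re-published on port 4421 with --secure-channel, and the initiator re-attaches with --psk; the third bdev dump then shows cntlid 3 on trsvcid 4421. The equivalent RPC sequence looks roughly like this sketch, where the key material and file name are placeholders rather than the actual values from this run (the real key appears in the trace below):

  KEY=/tmp/psk.key
  echo -n 'NVMeTLSkey-1:01:<psk-interchange-blob>:' > "$KEY"   # placeholder key
  chmod 0600 "$KEY"
  ./scripts/rpc.py keyring_file_add_key key0 "$KEY"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
      -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host1 --psk key0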
00:23:51.345 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:51.345 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.I0E1yLos9u 00:23:51.345 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:51.345 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.I0E1yLos9u 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.I0E1yLos9u 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.606 [2024-11-20 07:38:09.579906] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.606 [2024-11-20 07:38:09.580073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.606 [2024-11-20 07:38:09.603986] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.606 nvme0n1 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.606 [ 00:23:51.606 { 00:23:51.606 "name": "nvme0n1", 00:23:51.606 "aliases": [ 00:23:51.606 "079e61ca-aeeb-4e86-b8f9-1a7e4aef80b9" 00:23:51.606 ], 00:23:51.606 "product_name": "NVMe disk", 00:23:51.606 "block_size": 512, 00:23:51.606 "num_blocks": 2097152, 00:23:51.606 "uuid": "079e61ca-aeeb-4e86-b8f9-1a7e4aef80b9", 00:23:51.606 "numa_id": 0, 00:23:51.606 "assigned_rate_limits": { 00:23:51.606 "rw_ios_per_sec": 0, 00:23:51.606 "rw_mbytes_per_sec": 0, 00:23:51.606 "r_mbytes_per_sec": 0, 00:23:51.606 "w_mbytes_per_sec": 0 00:23:51.606 }, 00:23:51.606 "claimed": false, 00:23:51.606 "zoned": false, 00:23:51.606 "supported_io_types": { 00:23:51.606 "read": true, 00:23:51.606 "write": true, 00:23:51.606 "unmap": false, 00:23:51.606 "flush": true, 00:23:51.606 "reset": true, 00:23:51.606 "nvme_admin": true, 00:23:51.606 "nvme_io": true, 00:23:51.606 "nvme_io_md": false, 00:23:51.606 "write_zeroes": true, 00:23:51.606 "zcopy": false, 00:23:51.606 "get_zone_info": false, 00:23:51.606 "zone_management": false, 00:23:51.606 "zone_append": false, 00:23:51.606 "compare": true, 00:23:51.606 "compare_and_write": true, 00:23:51.606 "abort": true, 00:23:51.606 "seek_hole": false, 00:23:51.606 "seek_data": false, 00:23:51.606 "copy": true, 00:23:51.606 "nvme_iov_md": false 00:23:51.606 }, 00:23:51.606 "memory_domains": [ 00:23:51.606 { 00:23:51.606 "dma_device_id": "system", 00:23:51.606 "dma_device_type": 1 00:23:51.606 } 00:23:51.606 ], 00:23:51.606 "driver_specific": { 00:23:51.606 "nvme": [ 00:23:51.606 { 00:23:51.606 "trid": { 00:23:51.606 "trtype": "TCP", 00:23:51.606 "adrfam": "IPv4", 00:23:51.606 "traddr": "10.0.0.2", 00:23:51.606 "trsvcid": "4421", 00:23:51.606 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.606 }, 00:23:51.606 "ctrlr_data": { 00:23:51.606 "cntlid": 3, 00:23:51.606 "vendor_id": "0x8086", 00:23:51.606 "model_number": "SPDK bdev Controller", 00:23:51.606 "serial_number": "00000000000000000000", 00:23:51.606 "firmware_revision": "25.01", 00:23:51.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.606 "oacs": { 00:23:51.606 "security": 0, 00:23:51.606 "format": 0, 00:23:51.606 "firmware": 0, 00:23:51.606 "ns_manage": 0 00:23:51.606 }, 00:23:51.606 "multi_ctrlr": true, 00:23:51.606 "ana_reporting": false 00:23:51.606 }, 00:23:51.606 "vs": { 00:23:51.606 "nvme_version": "1.3" 00:23:51.606 }, 00:23:51.606 "ns_data": { 00:23:51.606 "id": 1, 00:23:51.606 "can_share": true 00:23:51.606 } 00:23:51.606 } 00:23:51.606 ], 00:23:51.606 "mp_policy": "active_passive" 00:23:51.606 } 00:23:51.606 } 00:23:51.606 ] 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.I0E1yLos9u 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
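The nvmftestfini call that follows (host/async_init.sh@79) is the shared teardown that closes every test in this log: unload the initiator-side kernel modules, kill the nvmf_tgt reactor by pid, strip the SPDK_NVMF iptables rules, and dismantle the cvl_0_0_ns_spdk namespace. A rough standalone equivalent is sketched below; the explicit netns delete is an assumption about what _remove_spdk_ns does, since its output is redirected to /dev/null in the trace:

  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring
  kill "$nvmfpid" && wait "$nvmfpid"            # pid 3489350 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns del cvl_0_0_ns_spdk                  # assumed cleanup step
  ip -4 addr flush cvl_0_1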
00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.606 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.607 rmmod nvme_tcp 00:23:51.607 rmmod nvme_fabrics 00:23:51.607 rmmod nvme_keyring 00:23:51.607 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.607 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:51.607 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:51.607 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3489350 ']' 00:23:51.607 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3489350 00:23:51.607 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 3489350 ']' 00:23:51.607 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 3489350 00:23:51.607 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:23:51.868 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:51.868 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3489350 00:23:51.868 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:51.868 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:51.868 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3489350' 00:23:51.868 killing process with pid 3489350 00:23:51.868 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 3489350 00:23:51.868 07:38:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 3489350 00:23:51.868 07:38:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:51.868 07:38:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:51.868 07:38:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:51.868 07:38:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:51.868 07:38:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:51.868 07:38:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:51.868 07:38:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:51.868 07:38:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:51.868 07:38:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:51.868 07:38:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:51.868 07:38:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.868 07:38:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.437 00:23:54.437 real 0m11.933s 00:23:54.437 user 0m4.298s 00:23:54.437 sys 0m6.178s 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.437 ************************************ 00:23:54.437 END TEST nvmf_async_init 00:23:54.437 ************************************ 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.437 ************************************ 00:23:54.437 START TEST dma 00:23:54.437 ************************************ 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:54.437 * Looking for test storage... 00:23:54.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:54.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.437 --rc genhtml_branch_coverage=1 00:23:54.437 --rc genhtml_function_coverage=1 00:23:54.437 --rc genhtml_legend=1 00:23:54.437 --rc geninfo_all_blocks=1 00:23:54.437 --rc geninfo_unexecuted_blocks=1 00:23:54.437 00:23:54.437 ' 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:54.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.437 --rc genhtml_branch_coverage=1 00:23:54.437 --rc genhtml_function_coverage=1 00:23:54.437 --rc genhtml_legend=1 00:23:54.437 --rc geninfo_all_blocks=1 00:23:54.437 --rc geninfo_unexecuted_blocks=1 00:23:54.437 00:23:54.437 ' 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:54.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.437 --rc genhtml_branch_coverage=1 00:23:54.437 --rc genhtml_function_coverage=1 00:23:54.437 --rc genhtml_legend=1 00:23:54.437 --rc geninfo_all_blocks=1 00:23:54.437 --rc geninfo_unexecuted_blocks=1 00:23:54.437 00:23:54.437 ' 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:54.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.437 --rc genhtml_branch_coverage=1 00:23:54.437 --rc genhtml_function_coverage=1 00:23:54.437 --rc genhtml_legend=1 00:23:54.437 --rc geninfo_all_blocks=1 00:23:54.437 --rc geninfo_unexecuted_blocks=1 00:23:54.437 00:23:54.437 ' 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.437 
07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.437 07:38:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:54.438 00:23:54.438 real 0m0.237s 00:23:54.438 user 0m0.147s 00:23:54.438 sys 0m0.105s 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:54.438 ************************************ 00:23:54.438 END TEST dma 00:23:54.438 ************************************ 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.438 ************************************ 00:23:54.438 START TEST nvmf_identify 00:23:54.438 
************************************ 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:54.438 * Looking for test storage... 00:23:54.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:54.438 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:54.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.700 --rc genhtml_branch_coverage=1 00:23:54.700 --rc genhtml_function_coverage=1 00:23:54.700 --rc genhtml_legend=1 00:23:54.700 --rc geninfo_all_blocks=1 00:23:54.700 --rc geninfo_unexecuted_blocks=1 00:23:54.700 00:23:54.700 ' 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:54.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.700 --rc genhtml_branch_coverage=1 00:23:54.700 --rc genhtml_function_coverage=1 00:23:54.700 --rc genhtml_legend=1 00:23:54.700 --rc geninfo_all_blocks=1 00:23:54.700 --rc geninfo_unexecuted_blocks=1 00:23:54.700 00:23:54.700 ' 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:54.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.700 --rc genhtml_branch_coverage=1 00:23:54.700 --rc genhtml_function_coverage=1 00:23:54.700 --rc genhtml_legend=1 00:23:54.700 --rc geninfo_all_blocks=1 00:23:54.700 --rc geninfo_unexecuted_blocks=1 00:23:54.700 00:23:54.700 ' 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:54.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.700 --rc genhtml_branch_coverage=1 00:23:54.700 --rc genhtml_function_coverage=1 00:23:54.700 --rc genhtml_legend=1 00:23:54.700 --rc geninfo_all_blocks=1 00:23:54.700 --rc geninfo_unexecuted_blocks=1 00:23:54.700 00:23:54.700 ' 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.700 07:38:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:02.837 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.837 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:02.838 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
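The discovery loop above classifies NICs purely by PCI vendor:device ID (Intel is 0x8086; the E810 parts are 0x1592/0x159b) and then resolves each matched function to its kernel interface through sysfs, which is where the "Found net devices under 0000:31:00.x" lines come from. A condensed sketch of that logic, substituting a hypothetical lspci-based enumeration for the script's pci_bus_cache:

    e810_ids=(1592 159b)                                  # E810 device IDs from the trace
    for pci in $(lspci -Dn -d "8086:" | awk '{print $1}'); do
        dev=$(lspci -n -s "$pci" | awk '{print $3}' | cut -d: -f2)
        if [[ " ${e810_ids[*]} " == *" $dev "* ]]; then
            # map the PCI function to its kernel net interface(s) via sysfs
            for net in "/sys/bus/pci/devices/$pci/net/"*; do
                echo "Found net device under $pci: ${net##*/}"
            done
        fi
    done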
00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:02.838 Found net devices under 0000:31:00.0: cvl_0_0 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:02.838 Found net devices under 0000:31:00.1: cvl_0_1 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:02.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:24:02.838 00:24:02.838 --- 10.0.0.2 ping statistics --- 00:24:02.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.838 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:24:02.838 00:24:02.838 --- 10.0.0.1 ping statistics --- 00:24:02.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.838 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3493973 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3493973 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 3493973 ']' 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:02.838 07:38:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.838 [2024-11-20 07:38:20.518784] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
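The successful pings above come out of a two-interface topology the trace builds verbatim: one physical port (cvl_0_0) is moved into a private namespace to act as the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator. Condensed from the ip/iptables commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic on the initiator port, tagged for later cleanup
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator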
00:24:02.838 [2024-11-20 07:38:20.518849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.838 [2024-11-20 07:38:20.623177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:02.838 [2024-11-20 07:38:20.677980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.838 [2024-11-20 07:38:20.678037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.838 [2024-11-20 07:38:20.678046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.838 [2024-11-20 07:38:20.678053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.838 [2024-11-20 07:38:20.678060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.838 [2024-11-20 07:38:20.680096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.838 [2024-11-20 07:38:20.680255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.838 [2024-11-20 07:38:20.680419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.838 [2024-11-20 07:38:20.680422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.412 [2024-11-20 07:38:21.350345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.412 Malloc0 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.412 [2024-11-20 07:38:21.468548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.412 [ 00:24:03.412 { 00:24:03.412 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:03.412 "subtype": "Discovery", 00:24:03.412 "listen_addresses": [ 00:24:03.412 { 00:24:03.412 "trtype": "TCP", 00:24:03.412 "adrfam": "IPv4", 00:24:03.412 "traddr": "10.0.0.2", 00:24:03.412 "trsvcid": "4420" 00:24:03.412 } 00:24:03.412 ], 00:24:03.412 "allow_any_host": true, 00:24:03.412 "hosts": [] 00:24:03.412 }, 00:24:03.412 { 00:24:03.412 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.412 "subtype": "NVMe", 00:24:03.412 "listen_addresses": [ 00:24:03.412 { 00:24:03.412 "trtype": "TCP", 00:24:03.412 "adrfam": "IPv4", 00:24:03.412 "traddr": "10.0.0.2", 00:24:03.412 "trsvcid": "4420" 00:24:03.412 } 00:24:03.412 ], 00:24:03.412 "allow_any_host": true, 00:24:03.412 "hosts": [], 00:24:03.412 "serial_number": "SPDK00000000000001", 00:24:03.412 "model_number": "SPDK bdev Controller", 00:24:03.412 "max_namespaces": 32, 00:24:03.412 "min_cntlid": 1, 00:24:03.412 "max_cntlid": 65519, 00:24:03.412 "namespaces": [ 00:24:03.412 { 00:24:03.412 "nsid": 1, 00:24:03.412 "bdev_name": "Malloc0", 00:24:03.412 "name": "Malloc0", 00:24:03.412 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:03.412 "eui64": "ABCDEF0123456789", 00:24:03.412 "uuid": "318a4173-f6dc-4f2d-a294-a7ae43a79416" 00:24:03.412 } 00:24:03.412 ] 00:24:03.412 } 00:24:03.412 ] 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.412 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:03.412 [2024-11-20 07:38:21.532543] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:24:03.412 [2024-11-20 07:38:21.532590] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494155 ] 00:24:03.412 [2024-11-20 07:38:21.589442] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:03.412 [2024-11-20 07:38:21.589522] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:03.412 [2024-11-20 07:38:21.589528] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:03.412 [2024-11-20 07:38:21.589546] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:03.412 [2024-11-20 07:38:21.589559] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:03.412 [2024-11-20 07:38:21.593163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:03.412 [2024-11-20 07:38:21.593212] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d9a550 0 00:24:03.412 [2024-11-20 07:38:21.600762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:03.412 [2024-11-20 07:38:21.600780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:03.412 [2024-11-20 07:38:21.600786] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:03.412 [2024-11-20 07:38:21.600794] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:03.412 [2024-11-20 07:38:21.600840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.412 [2024-11-20 07:38:21.600847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.412 [2024-11-20 07:38:21.600852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9a550) 00:24:03.412 [2024-11-20 07:38:21.600869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:03.412 [2024-11-20 07:38:21.600893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc100, cid 0, qid 0 00:24:03.412 [2024-11-20 07:38:21.608760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.412 [2024-11-20 07:38:21.608770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.412 [2024-11-20 07:38:21.608774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.412 [2024-11-20 07:38:21.608779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc100) on tqpair=0x1d9a550 00:24:03.412 [2024-11-20 07:38:21.608791] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:03.412 [2024-11-20 07:38:21.608800] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:03.412 [2024-11-20 07:38:21.608806] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:03.412 [2024-11-20 07:38:21.608823] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.412 [2024-11-20 07:38:21.608827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.412 [2024-11-20 07:38:21.608830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9a550) 00:24:03.412 [2024-11-20 07:38:21.608839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.412 [2024-11-20 07:38:21.608855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc100, cid 0, qid 0 00:24:03.412 [2024-11-20 07:38:21.609110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.413 [2024-11-20 07:38:21.609116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.413 [2024-11-20 07:38:21.609120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.609124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc100) on tqpair=0x1d9a550 00:24:03.413 [2024-11-20 07:38:21.609130] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:03.413 [2024-11-20 07:38:21.609137] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:03.413 [2024-11-20 07:38:21.609145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.609149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.609152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9a550) 00:24:03.413 [2024-11-20 07:38:21.609159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.413 [2024-11-20 07:38:21.609170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc100, cid 0, qid 0 00:24:03.413 [2024-11-20 07:38:21.609359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.413 [2024-11-20 07:38:21.609365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.413 [2024-11-20 07:38:21.609369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.609373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc100) on tqpair=0x1d9a550 00:24:03.413 [2024-11-20 07:38:21.609378] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:03.413 [2024-11-20 07:38:21.609387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:03.413 [2024-11-20 07:38:21.609398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.609402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.609405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9a550) 00:24:03.413 [2024-11-20 07:38:21.609412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.413 [2024-11-20 07:38:21.609424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc100, cid 0, qid 0 
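Before this identify run started, the target was configured through the rpc_cmd calls traced earlier (the TCP transport, a 64 MiB/512 B malloc bdev, subsystem cnode1 with its namespace, and TCP listeners on 10.0.0.2:4420), producing the nvmf_get_subsystems JSON shown above. Those calls map one-to-one onto scripts/rpc.py invocations; a sketch with the exact parameters from the trace (the full Jenkins workspace path is abbreviated):

    rpc=./scripts/rpc.py   # shorthand for the SPDK JSON-RPC client
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_get_subsystems                             # yields the JSON above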
00:24:03.413 [2024-11-20 07:38:21.609593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.413 [2024-11-20 07:38:21.609600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.413 [2024-11-20 07:38:21.609603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.609607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc100) on tqpair=0x1d9a550 00:24:03.413 [2024-11-20 07:38:21.609613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:03.413 [2024-11-20 07:38:21.609622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.609626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.609630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9a550) 00:24:03.413 [2024-11-20 07:38:21.609637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.413 [2024-11-20 07:38:21.609647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc100, cid 0, qid 0 00:24:03.413 [2024-11-20 07:38:21.609829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.413 [2024-11-20 07:38:21.609835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.413 [2024-11-20 07:38:21.609839] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.609843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc100) on tqpair=0x1d9a550 00:24:03.413 [2024-11-20 07:38:21.609848] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:03.413 [2024-11-20 07:38:21.609854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:03.413 [2024-11-20 07:38:21.609862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:03.413 [2024-11-20 07:38:21.609971] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:03.413 [2024-11-20 07:38:21.609976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:03.413 [2024-11-20 07:38:21.609986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.609990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.609994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9a550) 00:24:03.413 [2024-11-20 07:38:21.610001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.413 [2024-11-20 07:38:21.610012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc100, cid 0, qid 0 00:24:03.413 [2024-11-20 07:38:21.610201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.413 [2024-11-20 07:38:21.610207] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.413 [2024-11-20 07:38:21.610211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.610215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc100) on tqpair=0x1d9a550 00:24:03.413 [2024-11-20 07:38:21.610225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:03.413 [2024-11-20 07:38:21.610235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.610239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.610243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9a550) 00:24:03.413 [2024-11-20 07:38:21.610250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.413 [2024-11-20 07:38:21.610260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc100, cid 0, qid 0 00:24:03.413 [2024-11-20 07:38:21.610429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.413 [2024-11-20 07:38:21.610436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.413 [2024-11-20 07:38:21.610439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.610443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc100) on tqpair=0x1d9a550 00:24:03.413 [2024-11-20 07:38:21.610448] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:03.413 [2024-11-20 07:38:21.610452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:03.413 [2024-11-20 07:38:21.610461] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:03.413 [2024-11-20 07:38:21.610471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:03.413 [2024-11-20 07:38:21.610482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.610486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9a550) 00:24:03.413 [2024-11-20 07:38:21.610493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.413 [2024-11-20 07:38:21.610503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc100, cid 0, qid 0 00:24:03.413 [2024-11-20 07:38:21.610720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.413 [2024-11-20 07:38:21.610726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.413 [2024-11-20 07:38:21.610730] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.610735] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d9a550): datao=0, datal=4096, cccid=0 00:24:03.413 [2024-11-20 07:38:21.610739] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1dfc100) on tqpair(0x1d9a550): expected_datao=0, payload_size=4096 00:24:03.413 [2024-11-20 07:38:21.610749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.610777] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.610782] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.610964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.413 [2024-11-20 07:38:21.610970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.413 [2024-11-20 07:38:21.610974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.610978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc100) on tqpair=0x1d9a550 00:24:03.413 [2024-11-20 07:38:21.610987] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:03.413 [2024-11-20 07:38:21.610992] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:03.413 [2024-11-20 07:38:21.611000] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:03.413 [2024-11-20 07:38:21.611009] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:03.413 [2024-11-20 07:38:21.611014] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:03.413 [2024-11-20 07:38:21.611018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:03.413 [2024-11-20 07:38:21.611030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:03.413 [2024-11-20 07:38:21.611038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.611042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.611046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9a550) 00:24:03.413 [2024-11-20 07:38:21.611054] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.413 [2024-11-20 07:38:21.611065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc100, cid 0, qid 0 00:24:03.413 [2024-11-20 07:38:21.611259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.413 [2024-11-20 07:38:21.611265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.413 [2024-11-20 07:38:21.611269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.611273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc100) on tqpair=0x1d9a550 00:24:03.413 [2024-11-20 07:38:21.611282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.611286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.413 [2024-11-20 07:38:21.611289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9a550) 00:24:03.414 
[2024-11-20 07:38:21.611295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.414 [2024-11-20 07:38:21.611302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.414 [2024-11-20 07:38:21.611306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.414 [2024-11-20 07:38:21.611309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d9a550) 00:24:03.414 [2024-11-20 07:38:21.611315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.414 [2024-11-20 07:38:21.611321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.414 [2024-11-20 07:38:21.611325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.414 [2024-11-20 07:38:21.611329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d9a550) 00:24:03.414 [2024-11-20 07:38:21.611334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.414 [2024-11-20 07:38:21.611341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.414 [2024-11-20 07:38:21.611344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.414 [2024-11-20 07:38:21.611348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9a550) 00:24:03.414 [2024-11-20 07:38:21.611354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.414 [2024-11-20 07:38:21.611359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:03.414 [2024-11-20 07:38:21.611367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:03.414 [2024-11-20 07:38:21.611374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.414 [2024-11-20 07:38:21.611380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d9a550) 00:24:03.414 [2024-11-20 07:38:21.611388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.414 [2024-11-20 07:38:21.611400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc100, cid 0, qid 0 00:24:03.414 [2024-11-20 07:38:21.611405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc280, cid 1, qid 0 00:24:03.414 [2024-11-20 07:38:21.611410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc400, cid 2, qid 0 00:24:03.414 [2024-11-20 07:38:21.611414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc580, cid 3, qid 0 00:24:03.414 [2024-11-20 07:38:21.611419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc700, cid 4, qid 0 00:24:03.414 [2024-11-20 07:38:21.611670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.414 [2024-11-20 07:38:21.611676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.414 [2024-11-20 07:38:21.611679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:03.414 [2024-11-20 07:38:21.611683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc700) on tqpair=0x1d9a550 00:24:03.414 [2024-11-20 07:38:21.611692] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:03.414 [2024-11-20 07:38:21.611698] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:03.414 [2024-11-20 07:38:21.611708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.414 [2024-11-20 07:38:21.611712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d9a550) 00:24:03.414 [2024-11-20 07:38:21.611719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.414 [2024-11-20 07:38:21.611729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc700, cid 4, qid 0 00:24:03.414 [2024-11-20 07:38:21.611937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.414 [2024-11-20 07:38:21.611944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.414 [2024-11-20 07:38:21.611947] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.414 [2024-11-20 07:38:21.611951] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d9a550): datao=0, datal=4096, cccid=4 00:24:03.414 [2024-11-20 07:38:21.611956] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dfc700) on tqpair(0x1d9a550): expected_datao=0, payload_size=4096 00:24:03.414 [2024-11-20 07:38:21.611960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.414 [2024-11-20 07:38:21.611973] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.414 [2024-11-20 07:38:21.611977] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.679 [2024-11-20 07:38:21.656754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.679 [2024-11-20 07:38:21.656766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.679 [2024-11-20 07:38:21.656770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.679 [2024-11-20 07:38:21.656774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc700) on tqpair=0x1d9a550 00:24:03.679 [2024-11-20 07:38:21.656791] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:03.679 [2024-11-20 07:38:21.656823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.679 [2024-11-20 07:38:21.656828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d9a550) 00:24:03.679 [2024-11-20 07:38:21.656835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.679 [2024-11-20 07:38:21.656843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.679 [2024-11-20 07:38:21.656850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.679 [2024-11-20 07:38:21.656854] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d9a550) 00:24:03.679 [2024-11-20 07:38:21.656860] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.679 [2024-11-20 07:38:21.656878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc700, cid 4, qid 0 00:24:03.679 [2024-11-20 07:38:21.656884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc880, cid 5, qid 0 00:24:03.679 [2024-11-20 07:38:21.657143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.679 [2024-11-20 07:38:21.657150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.679 [2024-11-20 07:38:21.657153] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.679 [2024-11-20 07:38:21.657157] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d9a550): datao=0, datal=1024, cccid=4 00:24:03.679 [2024-11-20 07:38:21.657162] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dfc700) on tqpair(0x1d9a550): expected_datao=0, payload_size=1024 00:24:03.679 [2024-11-20 07:38:21.657166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.679 [2024-11-20 07:38:21.657173] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.679 [2024-11-20 07:38:21.657177] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.679 [2024-11-20 07:38:21.657183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.679 [2024-11-20 07:38:21.657189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.679 [2024-11-20 07:38:21.657192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.679 [2024-11-20 07:38:21.657196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc880) on tqpair=0x1d9a550 00:24:03.679 [2024-11-20 07:38:21.701755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.679 [2024-11-20 07:38:21.701766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.679 [2024-11-20 07:38:21.701770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.679 [2024-11-20 07:38:21.701774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc700) on tqpair=0x1d9a550 00:24:03.679 [2024-11-20 07:38:21.701788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.679 [2024-11-20 07:38:21.701792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d9a550) 00:24:03.679 [2024-11-20 07:38:21.701800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.679 [2024-11-20 07:38:21.701817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc700, cid 4, qid 0 00:24:03.679 [2024-11-20 07:38:21.702101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.679 [2024-11-20 07:38:21.702107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.679 [2024-11-20 07:38:21.702111] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.679 [2024-11-20 07:38:21.702115] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d9a550): datao=0, datal=3072, cccid=4 00:24:03.680 [2024-11-20 07:38:21.702119] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dfc700) on tqpair(0x1d9a550): expected_datao=0, payload_size=3072 00:24:03.680 [2024-11-20 07:38:21.702124] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.680 [2024-11-20 07:38:21.702159] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.680 [2024-11-20 07:38:21.702163] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.680 [2024-11-20 07:38:21.702355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.680 [2024-11-20 07:38:21.702362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.680 [2024-11-20 07:38:21.702365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.680 [2024-11-20 07:38:21.702369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc700) on tqpair=0x1d9a550 00:24:03.680 [2024-11-20 07:38:21.702382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.680 [2024-11-20 07:38:21.702386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d9a550) 00:24:03.680 [2024-11-20 07:38:21.702393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.680 [2024-11-20 07:38:21.702407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc700, cid 4, qid 0 00:24:03.680 [2024-11-20 07:38:21.702652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.680 [2024-11-20 07:38:21.702659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.680 [2024-11-20 07:38:21.702662] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.680 [2024-11-20 07:38:21.702666] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d9a550): datao=0, datal=8, cccid=4 00:24:03.680 [2024-11-20 07:38:21.702670] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dfc700) on tqpair(0x1d9a550): expected_datao=0, payload_size=8 00:24:03.680 [2024-11-20 07:38:21.702675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.680 [2024-11-20 07:38:21.702681] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.680 [2024-11-20 07:38:21.702685] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.680 [2024-11-20 07:38:21.743949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.680 [2024-11-20 07:38:21.743961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.680 [2024-11-20 07:38:21.743965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.680 [2024-11-20 07:38:21.743969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc700) on tqpair=0x1d9a550 00:24:03.680 ===================================================== 00:24:03.680 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:03.680 ===================================================== 00:24:03.680 Controller Capabilities/Features 00:24:03.680 ================================ 00:24:03.680 Vendor ID: 0000 00:24:03.680 Subsystem Vendor ID: 0000 00:24:03.680 Serial Number: .................... 00:24:03.680 Model Number: ........................................ 
00:24:03.680 Firmware Version: 25.01 00:24:03.680 Recommended Arb Burst: 0 00:24:03.680 IEEE OUI Identifier: 00 00 00 00:24:03.680 Multi-path I/O 00:24:03.680 May have multiple subsystem ports: No 00:24:03.680 May have multiple controllers: No 00:24:03.680 Associated with SR-IOV VF: No 00:24:03.680 Max Data Transfer Size: 131072 00:24:03.680 Max Number of Namespaces: 0 00:24:03.680 Max Number of I/O Queues: 1024 00:24:03.680 NVMe Specification Version (VS): 1.3 00:24:03.680 NVMe Specification Version (Identify): 1.3 00:24:03.680 Maximum Queue Entries: 128 00:24:03.680 Contiguous Queues Required: Yes 00:24:03.680 Arbitration Mechanisms Supported 00:24:03.680 Weighted Round Robin: Not Supported 00:24:03.680 Vendor Specific: Not Supported 00:24:03.680 Reset Timeout: 15000 ms 00:24:03.680 Doorbell Stride: 4 bytes 00:24:03.680 NVM Subsystem Reset: Not Supported 00:24:03.680 Command Sets Supported 00:24:03.680 NVM Command Set: Supported 00:24:03.680 Boot Partition: Not Supported 00:24:03.680 Memory Page Size Minimum: 4096 bytes 00:24:03.680 Memory Page Size Maximum: 4096 bytes 00:24:03.680 Persistent Memory Region: Not Supported 00:24:03.680 Optional Asynchronous Events Supported 00:24:03.680 Namespace Attribute Notices: Not Supported 00:24:03.680 Firmware Activation Notices: Not Supported 00:24:03.680 ANA Change Notices: Not Supported 00:24:03.680 PLE Aggregate Log Change Notices: Not Supported 00:24:03.680 LBA Status Info Alert Notices: Not Supported 00:24:03.680 EGE Aggregate Log Change Notices: Not Supported 00:24:03.680 Normal NVM Subsystem Shutdown event: Not Supported 00:24:03.680 Zone Descriptor Change Notices: Not Supported 00:24:03.680 Discovery Log Change Notices: Supported 00:24:03.680 Controller Attributes 00:24:03.680 128-bit Host Identifier: Not Supported 00:24:03.680 Non-Operational Permissive Mode: Not Supported 00:24:03.680 NVM Sets: Not Supported 00:24:03.680 Read Recovery Levels: Not Supported 00:24:03.680 Endurance Groups: Not Supported 00:24:03.680 Predictable Latency Mode: Not Supported 00:24:03.680 Traffic Based Keep ALive: Not Supported 00:24:03.680 Namespace Granularity: Not Supported 00:24:03.680 SQ Associations: Not Supported 00:24:03.680 UUID List: Not Supported 00:24:03.680 Multi-Domain Subsystem: Not Supported 00:24:03.680 Fixed Capacity Management: Not Supported 00:24:03.680 Variable Capacity Management: Not Supported 00:24:03.680 Delete Endurance Group: Not Supported 00:24:03.680 Delete NVM Set: Not Supported 00:24:03.680 Extended LBA Formats Supported: Not Supported 00:24:03.680 Flexible Data Placement Supported: Not Supported 00:24:03.680 00:24:03.680 Controller Memory Buffer Support 00:24:03.680 ================================ 00:24:03.680 Supported: No 00:24:03.680 00:24:03.680 Persistent Memory Region Support 00:24:03.680 ================================ 00:24:03.680 Supported: No 00:24:03.680 00:24:03.680 Admin Command Set Attributes 00:24:03.680 ============================ 00:24:03.680 Security Send/Receive: Not Supported 00:24:03.680 Format NVM: Not Supported 00:24:03.680 Firmware Activate/Download: Not Supported 00:24:03.680 Namespace Management: Not Supported 00:24:03.680 Device Self-Test: Not Supported 00:24:03.680 Directives: Not Supported 00:24:03.680 NVMe-MI: Not Supported 00:24:03.680 Virtualization Management: Not Supported 00:24:03.680 Doorbell Buffer Config: Not Supported 00:24:03.680 Get LBA Status Capability: Not Supported 00:24:03.680 Command & Feature Lockdown Capability: Not Supported 00:24:03.680 Abort Command Limit: 1 00:24:03.680 Async 
Event Request Limit: 4 00:24:03.680 Number of Firmware Slots: N/A 00:24:03.680 Firmware Slot 1 Read-Only: N/A 00:24:03.680 Firmware Activation Without Reset: N/A 00:24:03.680 Multiple Update Detection Support: N/A 00:24:03.680 Firmware Update Granularity: No Information Provided 00:24:03.680 Per-Namespace SMART Log: No 00:24:03.680 Asymmetric Namespace Access Log Page: Not Supported 00:24:03.680 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:03.680 Command Effects Log Page: Not Supported 00:24:03.680 Get Log Page Extended Data: Supported 00:24:03.680 Telemetry Log Pages: Not Supported 00:24:03.680 Persistent Event Log Pages: Not Supported 00:24:03.680 Supported Log Pages Log Page: May Support 00:24:03.680 Commands Supported & Effects Log Page: Not Supported 00:24:03.680 Feature Identifiers & Effects Log Page:May Support 00:24:03.680 NVMe-MI Commands & Effects Log Page: May Support 00:24:03.680 Data Area 4 for Telemetry Log: Not Supported 00:24:03.680 Error Log Page Entries Supported: 128 00:24:03.680 Keep Alive: Not Supported 00:24:03.680 00:24:03.680 NVM Command Set Attributes 00:24:03.680 ========================== 00:24:03.680 Submission Queue Entry Size 00:24:03.680 Max: 1 00:24:03.680 Min: 1 00:24:03.680 Completion Queue Entry Size 00:24:03.680 Max: 1 00:24:03.680 Min: 1 00:24:03.680 Number of Namespaces: 0 00:24:03.680 Compare Command: Not Supported 00:24:03.680 Write Uncorrectable Command: Not Supported 00:24:03.680 Dataset Management Command: Not Supported 00:24:03.680 Write Zeroes Command: Not Supported 00:24:03.680 Set Features Save Field: Not Supported 00:24:03.680 Reservations: Not Supported 00:24:03.680 Timestamp: Not Supported 00:24:03.680 Copy: Not Supported 00:24:03.680 Volatile Write Cache: Not Present 00:24:03.680 Atomic Write Unit (Normal): 1 00:24:03.680 Atomic Write Unit (PFail): 1 00:24:03.680 Atomic Compare & Write Unit: 1 00:24:03.680 Fused Compare & Write: Supported 00:24:03.680 Scatter-Gather List 00:24:03.680 SGL Command Set: Supported 00:24:03.680 SGL Keyed: Supported 00:24:03.680 SGL Bit Bucket Descriptor: Not Supported 00:24:03.680 SGL Metadata Pointer: Not Supported 00:24:03.680 Oversized SGL: Not Supported 00:24:03.680 SGL Metadata Address: Not Supported 00:24:03.680 SGL Offset: Supported 00:24:03.680 Transport SGL Data Block: Not Supported 00:24:03.680 Replay Protected Memory Block: Not Supported 00:24:03.680 00:24:03.680 Firmware Slot Information 00:24:03.680 ========================= 00:24:03.680 Active slot: 0 00:24:03.680 00:24:03.680 00:24:03.680 Error Log 00:24:03.680 ========= 00:24:03.680 00:24:03.680 Active Namespaces 00:24:03.680 ================= 00:24:03.681 Discovery Log Page 00:24:03.681 ================== 00:24:03.681 Generation Counter: 2 00:24:03.681 Number of Records: 2 00:24:03.681 Record Format: 0 00:24:03.681 00:24:03.681 Discovery Log Entry 0 00:24:03.681 ---------------------- 00:24:03.681 Transport Type: 3 (TCP) 00:24:03.681 Address Family: 1 (IPv4) 00:24:03.681 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:03.681 Entry Flags: 00:24:03.681 Duplicate Returned Information: 1 00:24:03.681 Explicit Persistent Connection Support for Discovery: 1 00:24:03.681 Transport Requirements: 00:24:03.681 Secure Channel: Not Required 00:24:03.681 Port ID: 0 (0x0000) 00:24:03.681 Controller ID: 65535 (0xffff) 00:24:03.681 Admin Max SQ Size: 128 00:24:03.681 Transport Service Identifier: 4420 00:24:03.681 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:03.681 Transport Address: 10.0.0.2 00:24:03.681 
Discovery Log Entry 1 00:24:03.681 ---------------------- 00:24:03.681 Transport Type: 3 (TCP) 00:24:03.681 Address Family: 1 (IPv4) 00:24:03.681 Subsystem Type: 2 (NVM Subsystem) 00:24:03.681 Entry Flags: 00:24:03.681 Duplicate Returned Information: 0 00:24:03.681 Explicit Persistent Connection Support for Discovery: 0 00:24:03.681 Transport Requirements: 00:24:03.681 Secure Channel: Not Required 00:24:03.681 Port ID: 0 (0x0000) 00:24:03.681 Controller ID: 65535 (0xffff) 00:24:03.681 Admin Max SQ Size: 128 00:24:03.681 Transport Service Identifier: 4420 00:24:03.681 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:03.681 Transport Address: 10.0.0.2 [2024-11-20 07:38:21.744078] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:03.681 [2024-11-20 07:38:21.744091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc100) on tqpair=0x1d9a550 00:24:03.681 [2024-11-20 07:38:21.744099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.681 [2024-11-20 07:38:21.744104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc280) on tqpair=0x1d9a550 00:24:03.681 [2024-11-20 07:38:21.744109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.681 [2024-11-20 07:38:21.744114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc400) on tqpair=0x1d9a550 00:24:03.681 [2024-11-20 07:38:21.744119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.681 [2024-11-20 07:38:21.744124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc580) on tqpair=0x1d9a550 00:24:03.681 [2024-11-20 07:38:21.744128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.681 [2024-11-20 07:38:21.744141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.744145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.744149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9a550) 00:24:03.681 [2024-11-20 07:38:21.744157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.681 [2024-11-20 07:38:21.744174] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc580, cid 3, qid 0 00:24:03.681 [2024-11-20 07:38:21.744386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.681 [2024-11-20 07:38:21.744392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.681 [2024-11-20 07:38:21.744396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.744400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc580) on tqpair=0x1d9a550 00:24:03.681 [2024-11-20 07:38:21.744410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.744414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.744417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9a550) 00:24:03.681 [2024-11-20 
07:38:21.744424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.681 [2024-11-20 07:38:21.744438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc580, cid 3, qid 0 00:24:03.681 [2024-11-20 07:38:21.744629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.681 [2024-11-20 07:38:21.744635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.681 [2024-11-20 07:38:21.744639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.744643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc580) on tqpair=0x1d9a550 00:24:03.681 [2024-11-20 07:38:21.744648] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:03.681 [2024-11-20 07:38:21.744653] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:03.681 [2024-11-20 07:38:21.744664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.744668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.744672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9a550) 00:24:03.681 [2024-11-20 07:38:21.744679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.681 [2024-11-20 07:38:21.744690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc580, cid 3, qid 0 00:24:03.681 [2024-11-20 07:38:21.744943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.681 [2024-11-20 07:38:21.744950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.681 [2024-11-20 07:38:21.744954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.744958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc580) on tqpair=0x1d9a550 00:24:03.681 [2024-11-20 07:38:21.744970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.744974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.744977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9a550) 00:24:03.681 [2024-11-20 07:38:21.744984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.681 [2024-11-20 07:38:21.744994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc580, cid 3, qid 0 00:24:03.681 [2024-11-20 07:38:21.745208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.681 [2024-11-20 07:38:21.745214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.681 [2024-11-20 07:38:21.745218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.745221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc580) on tqpair=0x1d9a550 00:24:03.681 [2024-11-20 07:38:21.745232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.745236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.745240] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9a550) 00:24:03.681 [2024-11-20 07:38:21.745246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.681 [2024-11-20 07:38:21.745256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc580, cid 3, qid 0 00:24:03.681 [2024-11-20 07:38:21.745431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.681 [2024-11-20 07:38:21.745437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.681 [2024-11-20 07:38:21.745443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.745447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc580) on tqpair=0x1d9a550 00:24:03.681 [2024-11-20 07:38:21.745457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.745461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.745464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9a550) 00:24:03.681 [2024-11-20 07:38:21.745471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.681 [2024-11-20 07:38:21.745481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc580, cid 3, qid 0 00:24:03.681 [2024-11-20 07:38:21.745656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.681 [2024-11-20 07:38:21.745662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.681 [2024-11-20 07:38:21.745666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.745670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc580) on tqpair=0x1d9a550 00:24:03.681 [2024-11-20 07:38:21.745679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.745683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.745687] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9a550) 00:24:03.681 [2024-11-20 07:38:21.745694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.681 [2024-11-20 07:38:21.745704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dfc580, cid 3, qid 0 00:24:03.681 [2024-11-20 07:38:21.749753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.681 [2024-11-20 07:38:21.749762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.681 [2024-11-20 07:38:21.749766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.681 [2024-11-20 07:38:21.749770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dfc580) on tqpair=0x1d9a550 00:24:03.681 [2024-11-20 07:38:21.749778] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:24:03.681 00:24:03.681 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
00:24:03.681 [2024-11-20 07:38:21.797095] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
00:24:03.681 [2024-11-20 07:38:21.797138] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494157 ]
00:24:03.682 [2024-11-20 07:38:21.853275] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:24:03.682 [2024-11-20 07:38:21.853341] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:24:03.682 [2024-11-20 07:38:21.853346] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:24:03.682 [2024-11-20 07:38:21.853364] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:24:03.682 [2024-11-20 07:38:21.853376] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:24:03.682 [2024-11-20 07:38:21.853987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:24:03.682 [2024-11-20 07:38:21.854022] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x174a550 0
00:24:03.682 [2024-11-20 07:38:21.859771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:24:03.682 [2024-11-20 07:38:21.859788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:24:03.682 [2024-11-20 07:38:21.859792] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:24:03.682 [2024-11-20 07:38:21.859796] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:24:03.682 [2024-11-20 07:38:21.859833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:03.682 [2024-11-20 07:38:21.859839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:03.682 [2024-11-20 07:38:21.859843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174a550)
00:24:03.682 [2024-11-20 07:38:21.859856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:24:03.682 [2024-11-20 07:38:21.859877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac100, cid 0, qid 0
00:24:03.682 [2024-11-20 07:38:21.870758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:03.682 [2024-11-20 07:38:21.870772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:03.682 [2024-11-20 07:38:21.870776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:03.682 [2024-11-20 07:38:21.870780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac100) on tqpair=0x174a550
00:24:03.682 [2024-11-20 07:38:21.870794] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:24:03.682 [2024-11-20 07:38:21.870804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:24:03.682 [2024-11-20 07:38:21.870810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:24:03.682 [2024-11-20 07:38:21.870824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:03.682 [2024-11-20 07:38:21.870829]
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.870832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174a550) 00:24:03.682 [2024-11-20 07:38:21.870841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.682 [2024-11-20 07:38:21.870857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac100, cid 0, qid 0 00:24:03.682 [2024-11-20 07:38:21.871086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.682 [2024-11-20 07:38:21.871096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.682 [2024-11-20 07:38:21.871100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.871104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac100) on tqpair=0x174a550 00:24:03.682 [2024-11-20 07:38:21.871109] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:03.682 [2024-11-20 07:38:21.871117] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:03.682 [2024-11-20 07:38:21.871124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.871127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.871131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174a550) 00:24:03.682 [2024-11-20 07:38:21.871140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.682 [2024-11-20 07:38:21.871152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac100, cid 0, qid 0 00:24:03.682 [2024-11-20 07:38:21.871372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.682 [2024-11-20 07:38:21.871379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.682 [2024-11-20 07:38:21.871382] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.871391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac100) on tqpair=0x174a550 00:24:03.682 [2024-11-20 07:38:21.871396] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:03.682 [2024-11-20 07:38:21.871405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:03.682 [2024-11-20 07:38:21.871412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.871417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.871424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174a550) 00:24:03.682 [2024-11-20 07:38:21.871431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.682 [2024-11-20 07:38:21.871442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac100, cid 0, qid 0 00:24:03.682 [2024-11-20 07:38:21.871656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.682 [2024-11-20 
07:38:21.871662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.682 [2024-11-20 07:38:21.871666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.871670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac100) on tqpair=0x174a550 00:24:03.682 [2024-11-20 07:38:21.871675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:03.682 [2024-11-20 07:38:21.871684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.871688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.871692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174a550) 00:24:03.682 [2024-11-20 07:38:21.871700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.682 [2024-11-20 07:38:21.871713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac100, cid 0, qid 0 00:24:03.682 [2024-11-20 07:38:21.871939] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.682 [2024-11-20 07:38:21.871946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.682 [2024-11-20 07:38:21.871949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.871953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac100) on tqpair=0x174a550 00:24:03.682 [2024-11-20 07:38:21.871958] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:03.682 [2024-11-20 07:38:21.871963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:03.682 [2024-11-20 07:38:21.871971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:03.682 [2024-11-20 07:38:21.872080] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:03.682 [2024-11-20 07:38:21.872085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:03.682 [2024-11-20 07:38:21.872094] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.872098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.872101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174a550) 00:24:03.682 [2024-11-20 07:38:21.872108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.682 [2024-11-20 07:38:21.872119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac100, cid 0, qid 0 00:24:03.682 [2024-11-20 07:38:21.872335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.682 [2024-11-20 07:38:21.872345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.682 [2024-11-20 07:38:21.872348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.682 [2024-11-20 
07:38:21.872352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac100) on tqpair=0x174a550 00:24:03.682 [2024-11-20 07:38:21.872357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:03.682 [2024-11-20 07:38:21.872367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.872370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.872374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174a550) 00:24:03.682 [2024-11-20 07:38:21.872383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.682 [2024-11-20 07:38:21.872396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac100, cid 0, qid 0 00:24:03.682 [2024-11-20 07:38:21.872623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.682 [2024-11-20 07:38:21.872631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.682 [2024-11-20 07:38:21.872634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.872638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac100) on tqpair=0x174a550 00:24:03.682 [2024-11-20 07:38:21.872642] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:03.682 [2024-11-20 07:38:21.872647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:03.682 [2024-11-20 07:38:21.872655] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:03.682 [2024-11-20 07:38:21.872667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:03.682 [2024-11-20 07:38:21.872681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.682 [2024-11-20 07:38:21.872684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174a550) 00:24:03.682 [2024-11-20 07:38:21.872692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.682 [2024-11-20 07:38:21.872702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac100, cid 0, qid 0 00:24:03.682 [2024-11-20 07:38:21.872935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.682 [2024-11-20 07:38:21.872942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.683 [2024-11-20 07:38:21.872946] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.872950] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174a550): datao=0, datal=4096, cccid=0 00:24:03.683 [2024-11-20 07:38:21.872955] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17ac100) on tqpair(0x174a550): expected_datao=0, payload_size=4096 00:24:03.683 [2024-11-20 07:38:21.872959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.872967] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.872971] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.683 [2024-11-20 07:38:21.873117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.683 [2024-11-20 07:38:21.873120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac100) on tqpair=0x174a550 00:24:03.683 [2024-11-20 07:38:21.873132] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:03.683 [2024-11-20 07:38:21.873141] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:03.683 [2024-11-20 07:38:21.873145] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:03.683 [2024-11-20 07:38:21.873153] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:03.683 [2024-11-20 07:38:21.873161] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:03.683 [2024-11-20 07:38:21.873166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:03.683 [2024-11-20 07:38:21.873177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:03.683 [2024-11-20 07:38:21.873184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174a550) 00:24:03.683 [2024-11-20 07:38:21.873199] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.683 [2024-11-20 07:38:21.873214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac100, cid 0, qid 0 00:24:03.683 [2024-11-20 07:38:21.873402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.683 [2024-11-20 07:38:21.873411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.683 [2024-11-20 07:38:21.873417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac100) on tqpair=0x174a550 00:24:03.683 [2024-11-20 07:38:21.873429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x174a550) 00:24:03.683 [2024-11-20 07:38:21.873442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.683 [2024-11-20 07:38:21.873450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873456] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x174a550) 00:24:03.683 [2024-11-20 07:38:21.873466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.683 [2024-11-20 07:38:21.873472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x174a550) 00:24:03.683 [2024-11-20 07:38:21.873485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.683 [2024-11-20 07:38:21.873491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x174a550) 00:24:03.683 [2024-11-20 07:38:21.873504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.683 [2024-11-20 07:38:21.873509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:03.683 [2024-11-20 07:38:21.873518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:03.683 [2024-11-20 07:38:21.873531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174a550) 00:24:03.683 [2024-11-20 07:38:21.873542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.683 [2024-11-20 07:38:21.873554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac100, cid 0, qid 0 00:24:03.683 [2024-11-20 07:38:21.873559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac280, cid 1, qid 0 00:24:03.683 [2024-11-20 07:38:21.873564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac400, cid 2, qid 0 00:24:03.683 [2024-11-20 07:38:21.873569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac580, cid 3, qid 0 00:24:03.683 [2024-11-20 07:38:21.873574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac700, cid 4, qid 0 00:24:03.683 [2024-11-20 07:38:21.873817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.683 [2024-11-20 07:38:21.873824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.683 [2024-11-20 07:38:21.873828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac700) on tqpair=0x174a550 00:24:03.683 [2024-11-20 07:38:21.873839] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 
00:24:03.683 [2024-11-20 07:38:21.873845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:03.683 [2024-11-20 07:38:21.873854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:03.683 [2024-11-20 07:38:21.873862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:03.683 [2024-11-20 07:38:21.873872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.873879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174a550) 00:24:03.683 [2024-11-20 07:38:21.873886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.683 [2024-11-20 07:38:21.873897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac700, cid 4, qid 0 00:24:03.683 [2024-11-20 07:38:21.874114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.683 [2024-11-20 07:38:21.874121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.683 [2024-11-20 07:38:21.874124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.874128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac700) on tqpair=0x174a550 00:24:03.683 [2024-11-20 07:38:21.874198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:03.683 [2024-11-20 07:38:21.874209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:03.683 [2024-11-20 07:38:21.874221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.874224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174a550) 00:24:03.683 [2024-11-20 07:38:21.874231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.683 [2024-11-20 07:38:21.874242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac700, cid 4, qid 0 00:24:03.683 [2024-11-20 07:38:21.874489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.683 [2024-11-20 07:38:21.874502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.683 [2024-11-20 07:38:21.874506] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.874510] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174a550): datao=0, datal=4096, cccid=4 00:24:03.683 [2024-11-20 07:38:21.874515] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17ac700) on tqpair(0x174a550): expected_datao=0, payload_size=4096 00:24:03.683 [2024-11-20 07:38:21.874520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.874527] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.874535] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.874669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.683 [2024-11-20 07:38:21.874675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.683 [2024-11-20 07:38:21.874679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.874683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac700) on tqpair=0x174a550 00:24:03.683 [2024-11-20 07:38:21.874694] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:03.683 [2024-11-20 07:38:21.874715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:03.683 [2024-11-20 07:38:21.874727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:03.683 [2024-11-20 07:38:21.874734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.683 [2024-11-20 07:38:21.874738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174a550) 00:24:03.683 [2024-11-20 07:38:21.878755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.683 [2024-11-20 07:38:21.878772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac700, cid 4, qid 0 00:24:03.683 [2024-11-20 07:38:21.879014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.683 [2024-11-20 07:38:21.879021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.683 [2024-11-20 07:38:21.879024] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.879028] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174a550): datao=0, datal=4096, cccid=4 00:24:03.684 [2024-11-20 07:38:21.879032] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17ac700) on tqpair(0x174a550): expected_datao=0, payload_size=4096 00:24:03.684 [2024-11-20 07:38:21.879037] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.879044] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.879047] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.879200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.684 [2024-11-20 07:38:21.879208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.684 [2024-11-20 07:38:21.879211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.879215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac700) on tqpair=0x174a550 00:24:03.684 [2024-11-20 07:38:21.879231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:03.684 [2024-11-20 07:38:21.879241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:03.684 [2024-11-20 07:38:21.879252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.684 
[2024-11-20 07:38:21.879257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174a550) 00:24:03.684 [2024-11-20 07:38:21.879266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.684 [2024-11-20 07:38:21.879277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac700, cid 4, qid 0 00:24:03.684 [2024-11-20 07:38:21.879481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.684 [2024-11-20 07:38:21.879488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.684 [2024-11-20 07:38:21.879492] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.879496] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174a550): datao=0, datal=4096, cccid=4 00:24:03.684 [2024-11-20 07:38:21.879500] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17ac700) on tqpair(0x174a550): expected_datao=0, payload_size=4096 00:24:03.684 [2024-11-20 07:38:21.879504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.879511] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.879516] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.879675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.684 [2024-11-20 07:38:21.879683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.684 [2024-11-20 07:38:21.879689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.879693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac700) on tqpair=0x174a550 00:24:03.684 [2024-11-20 07:38:21.879702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:03.684 [2024-11-20 07:38:21.879710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:03.684 [2024-11-20 07:38:21.879720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:03.684 [2024-11-20 07:38:21.879726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:03.684 [2024-11-20 07:38:21.879733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:03.684 [2024-11-20 07:38:21.879742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:03.684 [2024-11-20 07:38:21.879755] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:03.684 [2024-11-20 07:38:21.879759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:03.684 [2024-11-20 07:38:21.879765] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:03.684 [2024-11-20 07:38:21.879782] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.879786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174a550) 00:24:03.684 [2024-11-20 07:38:21.879792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.684 [2024-11-20 07:38:21.879800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.879803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.879807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x174a550) 00:24:03.684 [2024-11-20 07:38:21.879813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.684 [2024-11-20 07:38:21.879828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac700, cid 4, qid 0 00:24:03.684 [2024-11-20 07:38:21.879833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac880, cid 5, qid 0 00:24:03.684 [2024-11-20 07:38:21.880072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.684 [2024-11-20 07:38:21.880080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.684 [2024-11-20 07:38:21.880084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.880088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac700) on tqpair=0x174a550 00:24:03.684 [2024-11-20 07:38:21.880094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.684 [2024-11-20 07:38:21.880100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.684 [2024-11-20 07:38:21.880104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.880108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac880) on tqpair=0x174a550 00:24:03.684 [2024-11-20 07:38:21.880121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.880126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x174a550) 00:24:03.684 [2024-11-20 07:38:21.880132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.684 [2024-11-20 07:38:21.880143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac880, cid 5, qid 0 00:24:03.684 [2024-11-20 07:38:21.880363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.684 [2024-11-20 07:38:21.880370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.684 [2024-11-20 07:38:21.880373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.880377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac880) on tqpair=0x174a550 00:24:03.684 [2024-11-20 07:38:21.880386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.880390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x174a550) 00:24:03.684 [2024-11-20 07:38:21.880397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:03.684 [2024-11-20 07:38:21.880408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac880, cid 5, qid 0 00:24:03.684 [2024-11-20 07:38:21.880604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.684 [2024-11-20 07:38:21.880612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.684 [2024-11-20 07:38:21.880616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.880620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac880) on tqpair=0x174a550 00:24:03.684 [2024-11-20 07:38:21.880629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.880633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x174a550) 00:24:03.684 [2024-11-20 07:38:21.880642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.684 [2024-11-20 07:38:21.880656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac880, cid 5, qid 0 00:24:03.684 [2024-11-20 07:38:21.880854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.684 [2024-11-20 07:38:21.880862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.684 [2024-11-20 07:38:21.880866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.880870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac880) on tqpair=0x174a550 00:24:03.684 [2024-11-20 07:38:21.880888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.880893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x174a550) 00:24:03.684 [2024-11-20 07:38:21.880900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.684 [2024-11-20 07:38:21.880917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.684 [2024-11-20 07:38:21.880921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x174a550) 00:24:03.685 [2024-11-20 07:38:21.880927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.685 [2024-11-20 07:38:21.880935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.685 [2024-11-20 07:38:21.880939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x174a550) 00:24:03.685 [2024-11-20 07:38:21.880945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.685 [2024-11-20 07:38:21.880956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.685 [2024-11-20 07:38:21.880960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x174a550) 00:24:03.685 [2024-11-20 07:38:21.880966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.685 [2024-11-20 07:38:21.880978] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac880, cid 5, qid 0 00:24:03.685 [2024-11-20 07:38:21.880984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac700, cid 4, qid 0 00:24:03.685 [2024-11-20 07:38:21.880989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17aca00, cid 6, qid 0 00:24:03.685 [2024-11-20 07:38:21.880993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17acb80, cid 7, qid 0 00:24:03.948 [2024-11-20 07:38:21.881310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.948 [2024-11-20 07:38:21.881323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.948 [2024-11-20 07:38:21.881329] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.948 [2024-11-20 07:38:21.881336] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174a550): datao=0, datal=8192, cccid=5 00:24:03.948 [2024-11-20 07:38:21.881342] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17ac880) on tqpair(0x174a550): expected_datao=0, payload_size=8192 00:24:03.948 [2024-11-20 07:38:21.881346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.948 [2024-11-20 07:38:21.881407] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.948 [2024-11-20 07:38:21.881415] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.948 [2024-11-20 07:38:21.881424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.948 [2024-11-20 07:38:21.881432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.948 [2024-11-20 07:38:21.881435] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.948 [2024-11-20 07:38:21.881439] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174a550): datao=0, datal=512, cccid=4 00:24:03.948 [2024-11-20 07:38:21.881443] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17ac700) on tqpair(0x174a550): expected_datao=0, payload_size=512 00:24:03.948 [2024-11-20 07:38:21.881448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.948 [2024-11-20 07:38:21.881455] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.948 [2024-11-20 07:38:21.881458] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.948 [2024-11-20 07:38:21.881464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.948 [2024-11-20 07:38:21.881470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.948 [2024-11-20 07:38:21.881473] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.948 [2024-11-20 07:38:21.881477] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174a550): datao=0, datal=512, cccid=6 00:24:03.948 [2024-11-20 07:38:21.881481] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17aca00) on tqpair(0x174a550): expected_datao=0, payload_size=512 00:24:03.948 [2024-11-20 07:38:21.881485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.948 [2024-11-20 07:38:21.881494] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.948 [2024-11-20 07:38:21.881498] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.948 [2024-11-20 07:38:21.881504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.949 [2024-11-20 07:38:21.881511] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.949 [2024-11-20 07:38:21.881518] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.949 [2024-11-20 07:38:21.881522] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x174a550): datao=0, datal=4096, cccid=7 00:24:03.949 [2024-11-20 07:38:21.881526] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17acb80) on tqpair(0x174a550): expected_datao=0, payload_size=4096 00:24:03.949 [2024-11-20 07:38:21.881531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.949 [2024-11-20 07:38:21.881543] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.949 [2024-11-20 07:38:21.881546] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.949 [2024-11-20 07:38:21.881554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.949 [2024-11-20 07:38:21.881559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.949 [2024-11-20 07:38:21.881563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.949 [2024-11-20 07:38:21.881567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac880) on tqpair=0x174a550 00:24:03.949 [2024-11-20 07:38:21.881579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.949 [2024-11-20 07:38:21.881585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.949 [2024-11-20 07:38:21.881588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.949 [2024-11-20 07:38:21.881592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac700) on tqpair=0x174a550 00:24:03.949 [2024-11-20 07:38:21.881603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.949 [2024-11-20 07:38:21.881609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.949 [2024-11-20 07:38:21.881613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.949 [2024-11-20 07:38:21.881616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17aca00) on tqpair=0x174a550 00:24:03.949 [2024-11-20 07:38:21.881623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.949 [2024-11-20 07:38:21.881629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.949 [2024-11-20 07:38:21.881633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.949 [2024-11-20 07:38:21.881637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17acb80) on tqpair=0x174a550 00:24:03.949 ===================================================== 00:24:03.949 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.949 ===================================================== 00:24:03.949 Controller Capabilities/Features 00:24:03.949 ================================ 00:24:03.949 Vendor ID: 8086 00:24:03.949 Subsystem Vendor ID: 8086 00:24:03.949 Serial Number: SPDK00000000000001 00:24:03.949 Model Number: SPDK bdev Controller 00:24:03.949 Firmware Version: 25.01 00:24:03.949 Recommended Arb Burst: 6 00:24:03.949 IEEE OUI Identifier: e4 d2 5c 00:24:03.949 Multi-path I/O 00:24:03.949 May have multiple subsystem ports: Yes 00:24:03.949 May have multiple controllers: Yes 00:24:03.949 Associated with SR-IOV VF: No 00:24:03.949 Max Data Transfer Size: 131072 00:24:03.949 Max Number of Namespaces: 32 00:24:03.949 Max Number of I/O Queues: 127 
00:24:03.949 NVMe Specification Version (VS): 1.3 00:24:03.949 NVMe Specification Version (Identify): 1.3 00:24:03.949 Maximum Queue Entries: 128 00:24:03.949 Contiguous Queues Required: Yes 00:24:03.949 Arbitration Mechanisms Supported 00:24:03.949 Weighted Round Robin: Not Supported 00:24:03.949 Vendor Specific: Not Supported 00:24:03.949 Reset Timeout: 15000 ms 00:24:03.949 Doorbell Stride: 4 bytes 00:24:03.949 NVM Subsystem Reset: Not Supported 00:24:03.949 Command Sets Supported 00:24:03.949 NVM Command Set: Supported 00:24:03.949 Boot Partition: Not Supported 00:24:03.949 Memory Page Size Minimum: 4096 bytes 00:24:03.949 Memory Page Size Maximum: 4096 bytes 00:24:03.949 Persistent Memory Region: Not Supported 00:24:03.949 Optional Asynchronous Events Supported 00:24:03.949 Namespace Attribute Notices: Supported 00:24:03.949 Firmware Activation Notices: Not Supported 00:24:03.949 ANA Change Notices: Not Supported 00:24:03.949 PLE Aggregate Log Change Notices: Not Supported 00:24:03.949 LBA Status Info Alert Notices: Not Supported 00:24:03.949 EGE Aggregate Log Change Notices: Not Supported 00:24:03.949 Normal NVM Subsystem Shutdown event: Not Supported 00:24:03.949 Zone Descriptor Change Notices: Not Supported 00:24:03.949 Discovery Log Change Notices: Not Supported 00:24:03.949 Controller Attributes 00:24:03.949 128-bit Host Identifier: Supported 00:24:03.949 Non-Operational Permissive Mode: Not Supported 00:24:03.949 NVM Sets: Not Supported 00:24:03.949 Read Recovery Levels: Not Supported 00:24:03.949 Endurance Groups: Not Supported 00:24:03.949 Predictable Latency Mode: Not Supported 00:24:03.949 Traffic Based Keep ALive: Not Supported 00:24:03.949 Namespace Granularity: Not Supported 00:24:03.949 SQ Associations: Not Supported 00:24:03.949 UUID List: Not Supported 00:24:03.949 Multi-Domain Subsystem: Not Supported 00:24:03.949 Fixed Capacity Management: Not Supported 00:24:03.949 Variable Capacity Management: Not Supported 00:24:03.949 Delete Endurance Group: Not Supported 00:24:03.949 Delete NVM Set: Not Supported 00:24:03.949 Extended LBA Formats Supported: Not Supported 00:24:03.949 Flexible Data Placement Supported: Not Supported 00:24:03.949 00:24:03.949 Controller Memory Buffer Support 00:24:03.949 ================================ 00:24:03.949 Supported: No 00:24:03.949 00:24:03.949 Persistent Memory Region Support 00:24:03.949 ================================ 00:24:03.949 Supported: No 00:24:03.949 00:24:03.949 Admin Command Set Attributes 00:24:03.949 ============================ 00:24:03.949 Security Send/Receive: Not Supported 00:24:03.949 Format NVM: Not Supported 00:24:03.949 Firmware Activate/Download: Not Supported 00:24:03.949 Namespace Management: Not Supported 00:24:03.949 Device Self-Test: Not Supported 00:24:03.949 Directives: Not Supported 00:24:03.949 NVMe-MI: Not Supported 00:24:03.949 Virtualization Management: Not Supported 00:24:03.949 Doorbell Buffer Config: Not Supported 00:24:03.949 Get LBA Status Capability: Not Supported 00:24:03.949 Command & Feature Lockdown Capability: Not Supported 00:24:03.949 Abort Command Limit: 4 00:24:03.949 Async Event Request Limit: 4 00:24:03.949 Number of Firmware Slots: N/A 00:24:03.949 Firmware Slot 1 Read-Only: N/A 00:24:03.949 Firmware Activation Without Reset: N/A 00:24:03.949 Multiple Update Detection Support: N/A 00:24:03.949 Firmware Update Granularity: No Information Provided 00:24:03.949 Per-Namespace SMART Log: No 00:24:03.949 Asymmetric Namespace Access Log Page: Not Supported 00:24:03.949 Subsystem NQN: 
nqn.2016-06.io.spdk:cnode1 00:24:03.949 Command Effects Log Page: Supported 00:24:03.949 Get Log Page Extended Data: Supported 00:24:03.949 Telemetry Log Pages: Not Supported 00:24:03.949 Persistent Event Log Pages: Not Supported 00:24:03.949 Supported Log Pages Log Page: May Support 00:24:03.949 Commands Supported & Effects Log Page: Not Supported 00:24:03.949 Feature Identifiers & Effects Log Page:May Support 00:24:03.949 NVMe-MI Commands & Effects Log Page: May Support 00:24:03.949 Data Area 4 for Telemetry Log: Not Supported 00:24:03.949 Error Log Page Entries Supported: 128 00:24:03.949 Keep Alive: Supported 00:24:03.949 Keep Alive Granularity: 10000 ms 00:24:03.949 00:24:03.949 NVM Command Set Attributes 00:24:03.949 ========================== 00:24:03.949 Submission Queue Entry Size 00:24:03.949 Max: 64 00:24:03.949 Min: 64 00:24:03.949 Completion Queue Entry Size 00:24:03.949 Max: 16 00:24:03.949 Min: 16 00:24:03.949 Number of Namespaces: 32 00:24:03.949 Compare Command: Supported 00:24:03.949 Write Uncorrectable Command: Not Supported 00:24:03.949 Dataset Management Command: Supported 00:24:03.949 Write Zeroes Command: Supported 00:24:03.949 Set Features Save Field: Not Supported 00:24:03.949 Reservations: Supported 00:24:03.949 Timestamp: Not Supported 00:24:03.949 Copy: Supported 00:24:03.949 Volatile Write Cache: Present 00:24:03.949 Atomic Write Unit (Normal): 1 00:24:03.949 Atomic Write Unit (PFail): 1 00:24:03.949 Atomic Compare & Write Unit: 1 00:24:03.949 Fused Compare & Write: Supported 00:24:03.949 Scatter-Gather List 00:24:03.949 SGL Command Set: Supported 00:24:03.949 SGL Keyed: Supported 00:24:03.949 SGL Bit Bucket Descriptor: Not Supported 00:24:03.949 SGL Metadata Pointer: Not Supported 00:24:03.949 Oversized SGL: Not Supported 00:24:03.949 SGL Metadata Address: Not Supported 00:24:03.949 SGL Offset: Supported 00:24:03.949 Transport SGL Data Block: Not Supported 00:24:03.949 Replay Protected Memory Block: Not Supported 00:24:03.949 00:24:03.949 Firmware Slot Information 00:24:03.949 ========================= 00:24:03.949 Active slot: 1 00:24:03.949 Slot 1 Firmware Revision: 25.01 00:24:03.950 00:24:03.950 00:24:03.950 Commands Supported and Effects 00:24:03.950 ============================== 00:24:03.950 Admin Commands 00:24:03.950 -------------- 00:24:03.950 Get Log Page (02h): Supported 00:24:03.950 Identify (06h): Supported 00:24:03.950 Abort (08h): Supported 00:24:03.950 Set Features (09h): Supported 00:24:03.950 Get Features (0Ah): Supported 00:24:03.950 Asynchronous Event Request (0Ch): Supported 00:24:03.950 Keep Alive (18h): Supported 00:24:03.950 I/O Commands 00:24:03.950 ------------ 00:24:03.950 Flush (00h): Supported LBA-Change 00:24:03.950 Write (01h): Supported LBA-Change 00:24:03.950 Read (02h): Supported 00:24:03.950 Compare (05h): Supported 00:24:03.950 Write Zeroes (08h): Supported LBA-Change 00:24:03.950 Dataset Management (09h): Supported LBA-Change 00:24:03.950 Copy (19h): Supported LBA-Change 00:24:03.950 00:24:03.950 Error Log 00:24:03.950 ========= 00:24:03.950 00:24:03.950 Arbitration 00:24:03.950 =========== 00:24:03.950 Arbitration Burst: 1 00:24:03.950 00:24:03.950 Power Management 00:24:03.950 ================ 00:24:03.950 Number of Power States: 1 00:24:03.950 Current Power State: Power State #0 00:24:03.950 Power State #0: 00:24:03.950 Max Power: 0.00 W 00:24:03.950 Non-Operational State: Operational 00:24:03.950 Entry Latency: Not Reported 00:24:03.950 Exit Latency: Not Reported 00:24:03.950 Relative Read Throughput: 0 00:24:03.950 
Relative Read Latency: 0 00:24:03.950 Relative Write Throughput: 0 00:24:03.950 Relative Write Latency: 0 00:24:03.950 Idle Power: Not Reported 00:24:03.950 Active Power: Not Reported 00:24:03.950 Non-Operational Permissive Mode: Not Supported 00:24:03.950 00:24:03.950 Health Information 00:24:03.950 ================== 00:24:03.950 Critical Warnings: 00:24:03.950 Available Spare Space: OK 00:24:03.950 Temperature: OK 00:24:03.950 Device Reliability: OK 00:24:03.950 Read Only: No 00:24:03.950 Volatile Memory Backup: OK 00:24:03.950 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:03.950 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:03.950 Available Spare: 0% 00:24:03.950 Available Spare Threshold: 0% 00:24:03.950 Life Percentage Used:[2024-11-20 07:38:21.881750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.950 [2024-11-20 07:38:21.881756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x174a550) 00:24:03.950 [2024-11-20 07:38:21.881763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.950 [2024-11-20 07:38:21.881774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17acb80, cid 7, qid 0 00:24:03.950 [2024-11-20 07:38:21.881963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.950 [2024-11-20 07:38:21.881970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.950 [2024-11-20 07:38:21.881973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.950 [2024-11-20 07:38:21.881977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17acb80) on tqpair=0x174a550 00:24:03.950 [2024-11-20 07:38:21.882013] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:03.950 [2024-11-20 07:38:21.882023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac100) on tqpair=0x174a550 00:24:03.950 [2024-11-20 07:38:21.882030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.950 [2024-11-20 07:38:21.882040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac280) on tqpair=0x174a550 00:24:03.950 [2024-11-20 07:38:21.882045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.950 [2024-11-20 07:38:21.882050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac400) on tqpair=0x174a550 00:24:03.950 [2024-11-20 07:38:21.882055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.950 [2024-11-20 07:38:21.882063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac580) on tqpair=0x174a550 00:24:03.950 [2024-11-20 07:38:21.882068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.950 [2024-11-20 07:38:21.882077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.950 [2024-11-20 07:38:21.882080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.950 [2024-11-20 07:38:21.882084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x174a550) 00:24:03.950 [2024-11-20 07:38:21.882091] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.950 [2024-11-20 07:38:21.882103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac580, cid 3, qid 0 00:24:03.950 [2024-11-20 07:38:21.882316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.950 [2024-11-20 07:38:21.882323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.950 [2024-11-20 07:38:21.882327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.950 [2024-11-20 07:38:21.882330] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac580) on tqpair=0x174a550 00:24:03.950 [2024-11-20 07:38:21.882337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.950 [2024-11-20 07:38:21.882341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.950 [2024-11-20 07:38:21.882345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x174a550) 00:24:03.950 [2024-11-20 07:38:21.882352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.950 [2024-11-20 07:38:21.882368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac580, cid 3, qid 0 00:24:03.950 [2024-11-20 07:38:21.882561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.950 [2024-11-20 07:38:21.882568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.950 [2024-11-20 07:38:21.882571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.950 [2024-11-20 07:38:21.882575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac580) on tqpair=0x174a550 00:24:03.950 [2024-11-20 07:38:21.882580] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:03.950 [2024-11-20 07:38:21.882585] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:03.950 [2024-11-20 07:38:21.882595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.950 [2024-11-20 07:38:21.882599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.950 [2024-11-20 07:38:21.882603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x174a550) 00:24:03.950 [2024-11-20 07:38:21.882614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.950 [2024-11-20 07:38:21.882625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17ac580, cid 3, qid 0 00:24:03.950 [2024-11-20 07:38:21.886758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.950 [2024-11-20 07:38:21.886769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.950 [2024-11-20 07:38:21.886773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.950 [2024-11-20 07:38:21.886777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17ac580) on tqpair=0x174a550 00:24:03.950 [2024-11-20 07:38:21.886789] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:24:03.950 0% 00:24:03.950 Data Units Read: 0 00:24:03.950 Data Units Written: 0 00:24:03.950 Host Read Commands: 0 
00:24:03.950 Host Write Commands: 0 00:24:03.950 Controller Busy Time: 0 minutes 00:24:03.950 Power Cycles: 0 00:24:03.950 Power On Hours: 0 hours 00:24:03.950 Unsafe Shutdowns: 0 00:24:03.950 Unrecoverable Media Errors: 0 00:24:03.950 Lifetime Error Log Entries: 0 00:24:03.950 Warning Temperature Time: 0 minutes 00:24:03.950 Critical Temperature Time: 0 minutes 00:24:03.950 00:24:03.950 Number of Queues 00:24:03.950 ================ 00:24:03.950 Number of I/O Submission Queues: 127 00:24:03.950 Number of I/O Completion Queues: 127 00:24:03.950 00:24:03.950 Active Namespaces 00:24:03.950 ================= 00:24:03.950 Namespace ID:1 00:24:03.950 Error Recovery Timeout: Unlimited 00:24:03.950 Command Set Identifier: NVM (00h) 00:24:03.950 Deallocate: Supported 00:24:03.950 Deallocated/Unwritten Error: Not Supported 00:24:03.950 Deallocated Read Value: Unknown 00:24:03.950 Deallocate in Write Zeroes: Not Supported 00:24:03.950 Deallocated Guard Field: 0xFFFF 00:24:03.951 Flush: Supported 00:24:03.951 Reservation: Supported 00:24:03.951 Namespace Sharing Capabilities: Multiple Controllers 00:24:03.951 Size (in LBAs): 131072 (0GiB) 00:24:03.951 Capacity (in LBAs): 131072 (0GiB) 00:24:03.951 Utilization (in LBAs): 131072 (0GiB) 00:24:03.951 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:03.951 EUI64: ABCDEF0123456789 00:24:03.951 UUID: 318a4173-f6dc-4f2d-a294-a7ae43a79416 00:24:03.951 Thin Provisioning: Not Supported 00:24:03.951 Per-NS Atomic Units: Yes 00:24:03.951 Atomic Boundary Size (Normal): 0 00:24:03.951 Atomic Boundary Size (PFail): 0 00:24:03.951 Atomic Boundary Offset: 0 00:24:03.951 Maximum Single Source Range Length: 65535 00:24:03.951 Maximum Copy Length: 65535 00:24:03.951 Maximum Source Range Count: 1 00:24:03.951 NGUID/EUI64 Never Reused: No 00:24:03.951 Namespace Write Protected: No 00:24:03.951 Number of LBA Formats: 1 00:24:03.951 Current LBA Format: LBA Format #00 00:24:03.951 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:03.951 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.951 rmmod nvme_tcp 00:24:03.951 rmmod nvme_fabrics 00:24:03.951 rmmod nvme_keyring 00:24:03.951 07:38:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.951 07:38:22 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:03.951 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:03.951 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3493973 ']' 00:24:03.951 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3493973 00:24:03.951 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 3493973 ']' 00:24:03.951 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 3493973 00:24:03.951 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:24:03.951 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:03.951 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3493973 00:24:03.951 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:03.951 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:03.951 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3493973' 00:24:03.951 killing process with pid 3493973 00:24:03.951 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 3493973 00:24:03.951 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 3493973 00:24:04.211 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.211 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.211 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.211 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:04.211 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.211 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.211 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:04.211 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.211 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.211 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.211 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.211 07:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.757 07:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.757 00:24:06.757 real 0m11.834s 00:24:06.757 user 0m8.569s 00:24:06.757 sys 0m6.323s 00:24:06.757 07:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:06.757 07:38:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:06.757 ************************************ 00:24:06.757 END TEST nvmf_identify 00:24:06.757 ************************************ 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 
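The full controller dump above (from the "NVMe over Fabrics controller at 10.0.0.2:4420" banner through the LBA format list) is the output of SPDK's identify example run against the target this job set up. As a hedged sketch of reproducing it by hand — the binary path assumes a default SPDK build tree, and the nvme-cli lines assume the kernel TCP initiator modules this test just unloaded are loaded again — either path below queries the same subsystem:

  # userspace: SPDK's identify example speaks NVMe/TCP directly, no kernel initiator needed
  ./build/examples/identify -r "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1"

  # kernel initiator: discover the listener, then connect to the subsystem
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1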
00:24:06.758 07:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.758 ************************************ 00:24:06.758 START TEST nvmf_perf 00:24:06.758 ************************************ 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:06.758 * Looking for test storage... 00:24:06.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:06.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.758 --rc genhtml_branch_coverage=1 00:24:06.758 --rc genhtml_function_coverage=1 00:24:06.758 --rc genhtml_legend=1 00:24:06.758 --rc geninfo_all_blocks=1 00:24:06.758 --rc geninfo_unexecuted_blocks=1 00:24:06.758 00:24:06.758 ' 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:06.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.758 --rc genhtml_branch_coverage=1 00:24:06.758 --rc genhtml_function_coverage=1 00:24:06.758 --rc genhtml_legend=1 00:24:06.758 --rc geninfo_all_blocks=1 00:24:06.758 --rc geninfo_unexecuted_blocks=1 00:24:06.758 00:24:06.758 ' 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:06.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.758 --rc genhtml_branch_coverage=1 00:24:06.758 --rc genhtml_function_coverage=1 00:24:06.758 --rc genhtml_legend=1 00:24:06.758 --rc geninfo_all_blocks=1 00:24:06.758 --rc geninfo_unexecuted_blocks=1 00:24:06.758 00:24:06.758 ' 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:06.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.758 --rc genhtml_branch_coverage=1 00:24:06.758 --rc genhtml_function_coverage=1 00:24:06.758 --rc genhtml_legend=1 00:24:06.758 --rc geninfo_all_blocks=1 00:24:06.758 --rc geninfo_unexecuted_blocks=1 00:24:06.758 00:24:06.758 ' 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.758 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.759 07:38:24 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.759 07:38:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.904 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.904 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.904 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.904 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.904 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.904 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.904 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:14.905 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:14.905 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:14.905 Found net devices under 0000:31:00.0: cvl_0_0 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.905 07:38:31 
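The device-discovery loop being traced here leans entirely on sysfs: a NIC's PCI function lists its bound kernel interfaces under /sys/bus/pci/devices/<bdf>/net/. A minimal standalone sketch of the same lookup, usable outside the suite (the address 0000:31:00.0 is taken from this run; the rest is plain bash):

    pci=0000:31:00.0
    # Glob whatever interfaces the driver registered for this PCI function.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    if [[ -e ${pci_net_devs[0]} ]]; then
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    else
        echo "no net devices under $pci (driver unbound?)" >&2
    fi

The existence test matters: when the glob matches nothing, bash leaves the literal pattern in the array, which would otherwise masquerade as an interface name.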
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:14.905 Found net devices under 0000:31:00.1: cvl_0_1 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.905 07:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.905 07:38:32 
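The nvmf_tcp_init steps traced in this stretch build the standard two-port rig for NET_TYPE=phy: one E810 port stays in the root namespace as the initiator, its sibling moves into a private namespace as the target, so NVMe/TCP traffic crosses the physical link rather than the kernel loopback path. Condensed, with the interface and namespace names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

From here on every target-side command, nvmf_tgt included, is wrapped in ip netns exec cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD.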
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:14.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:24:14.905 00:24:14.905 --- 10.0.0.2 ping statistics --- 00:24:14.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.905 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:24:14.905 00:24:14.905 --- 10.0.0.1 ping statistics --- 00:24:14.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.905 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.905 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3498506 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3498506 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 3498506 ']' 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:14.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.906 [2024-11-20 07:38:32.422228] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:24:14.906 [2024-11-20 07:38:32.422298] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.906 [2024-11-20 07:38:32.513889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.906 [2024-11-20 07:38:32.561291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.906 [2024-11-20 07:38:32.561347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.906 [2024-11-20 07:38:32.561354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.906 [2024-11-20 07:38:32.561360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.906 [2024-11-20 07:38:32.561364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.906 [2024-11-20 07:38:32.566776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.906 [2024-11-20 07:38:32.566876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.906 [2024-11-20 07:38:32.567140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.906 [2024-11-20 07:38:32.567143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:14.906 07:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:15.211 07:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:15.211 07:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:15.472 07:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:15.472 07:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:15.472 07:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
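The local_nvme_trid assignment just traced is a small but load-bearing trick: gen_nvme.sh writes a bdev config containing an Nvme0 entry, load_subsystem_config applies it, and jq then pulls the PCIe address back out of the live configuration. A sketch of that pipeline (paths as in this workspace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Read back the traddr of the bdev named Nvme0 from the running target.
    local_nvme_trid=$("$rpc" framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr')
    echo "$local_nvme_trid"    # 0000:65:00.0 in this run

jq's -r flag is what keeps the later '[' -n ... ']' test and the perf -r 'trtype:PCIe traddr:...' argument free of stray double quotes.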
00:24:15.472 07:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:15.472 07:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:15.472 07:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:15.472 07:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:15.733 [2024-11-20 07:38:33.826055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.733 07:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:15.993 07:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:15.993 07:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.253 07:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:16.253 07:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:16.514 07:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.514 [2024-11-20 07:38:34.629737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.514 07:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:16.775 07:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:16.775 07:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:16.775 07:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:16.775 07:38:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:18.159 Initializing NVMe Controllers 00:24:18.159 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:18.159 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:18.159 Initialization complete. Launching workers. 
00:24:18.159 ========================================================
00:24:18.159 Latency(us)
00:24:18.159 Device Information : IOPS MiB/s Average min max
00:24:18.159 PCIE (0000:65:00.0) NSID 1 from core 0: 78421.45 306.33 407.66 32.80 4999.28
00:24:18.159 ========================================================
00:24:18.159 Total : 78421.45 306.33 407.66 32.80 4999.28
00:24:18.159
00:24:18.159 07:38:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:19.542 Initializing NVMe Controllers
00:24:19.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:19.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:19.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:19.543 Initialization complete. Launching workers.
00:24:19.543 ========================================================
00:24:19.543 Latency(us)
00:24:19.543 Device Information : IOPS MiB/s Average min max
00:24:19.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.00 0.32 12539.92 168.31 45955.33
00:24:19.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15236.29 7853.86 48014.82
00:24:19.543 ========================================================
00:24:19.543 Total : 147.00 0.57 13750.53 168.31 48014.82
00:24:19.543
00:24:19.543 07:38:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:20.929 Initializing NVMe Controllers
00:24:20.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:20.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:20.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:20.929 Initialization complete. Launching workers.
00:24:20.929 ========================================================
00:24:20.929 Latency(us)
00:24:20.929 Device Information : IOPS MiB/s Average min max
00:24:20.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11792.81 46.07 2715.84 452.94 6301.87
00:24:20.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3866.94 15.11 8312.39 5372.89 16615.45
00:24:20.929 ========================================================
00:24:20.929 Total : 15659.75 61.17 4097.82 452.94 16615.45
00:24:20.929
00:24:20.929 07:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:24:20.929 07:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:24:20.929 07:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:23.488 Initializing NVMe Controllers
00:24:23.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:23.488 Controller IO queue size 128, less than required.
00:24:23.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:23.488 Controller IO queue size 128, less than required.
00:24:23.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:23.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:23.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:23.488 Initialization complete. Launching workers.
00:24:23.488 ========================================================
00:24:23.488 Latency(us)
00:24:23.488 Device Information : IOPS MiB/s Average min max
00:24:23.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1827.39 456.85 71370.99 45407.80 117132.51
00:24:23.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 593.33 148.33 220328.80 63278.34 355600.55
00:24:23.488 ========================================================
00:24:23.488 Total : 2420.72 605.18 107881.16 45407.80 355600.55
00:24:23.488
00:24:23.488 07:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:23.488 No valid NVMe controllers or AIO or URING devices found
00:24:23.488 Initializing NVMe Controllers
00:24:23.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:23.488 Controller IO queue size 128, less than required.
00:24:23.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:23.488 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:23.488 Controller IO queue size 128, less than required.
00:24:23.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:23.488 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:24:23.488 WARNING: Some requested NVMe devices were skipped
00:24:23.488 07:38:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:24:26.033 Initializing NVMe Controllers
00:24:26.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:26.033 Controller IO queue size 128, less than required.
00:24:26.033 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:26.033 Controller IO queue size 128, less than required.
00:24:26.033 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:26.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:26.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:26.033 Initialization complete. Launching workers.
00:24:26.033
00:24:26.033 ====================
00:24:26.033 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:24:26.033 TCP transport:
00:24:26.033 polls: 36552
00:24:26.033 idle_polls: 22324
00:24:26.033 sock_completions: 14228
00:24:26.033 nvme_completions: 6607
00:24:26.033 submitted_requests: 9900
00:24:26.033 queued_requests: 1
00:24:26.033
00:24:26.033 ====================
00:24:26.033 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:24:26.033 TCP transport:
00:24:26.033 polls: 34768
00:24:26.033 idle_polls: 19993
00:24:26.033 sock_completions: 14775
00:24:26.033 nvme_completions: 8051
00:24:26.033 submitted_requests: 12120
00:24:26.033 queued_requests: 1
00:24:26.033 ========================================================
00:24:26.033 Latency(us)
00:24:26.033 Device Information : IOPS MiB/s Average min max
00:24:26.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1651.44 412.86 80032.84 48523.83 136833.59
00:24:26.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2012.43 503.11 63907.66 30695.98 126524.54
00:24:26.033 ========================================================
00:24:26.033 Total : 3663.87 915.97 71175.88 30695.98 136833.59
00:24:26.033
00:24:26.033 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:24:26.033 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:26.294 rmmod nvme_tcp
00:24:26.294 rmmod nvme_fabrics
00:24:26.294 rmmod nvme_keyring
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3498506 ']'
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3498506
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 3498506 ']'
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 3498506
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3498506
00:24:26.294 07:38:44
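The --transport-stat dump above is easiest to read as a busy-poll ratio, (polls - idle_polls) / polls: the fraction of poll-group iterations that actually found socket work. A throwaway check with the numbers from this run (plain arithmetic, not part of the suite):

    # NSID 1: (36552 - 22324) / 36552  ->  ~0.39 busy
    # NSID 2: (34768 - 19993) / 34768  ->  ~0.42 busy
    awk 'BEGIN { printf "%.2f %.2f\n", (36552-22324)/36552, (34768-19993)/34768 }'

So even at -q 128 with 256 KiB IOs, roughly six polls in ten came back idle on this link.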
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3498506' 00:24:26.294 killing process with pid 3498506 00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 3498506 00:24:26.294 07:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 3498506 00:24:28.839 07:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.839 07:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.839 07:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.839 07:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:28.839 07:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:28.839 07:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.839 07:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.839 07:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.839 07:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.839 07:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.839 07:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.839 07:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.752 00:24:30.752 real 0m24.083s 00:24:30.752 user 0m56.921s 00:24:30.752 sys 0m8.802s 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:30.752 ************************************ 00:24:30.752 END TEST nvmf_perf 00:24:30.752 ************************************ 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.752 ************************************ 00:24:30.752 START TEST nvmf_fio_host 00:24:30.752 ************************************ 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:30.752 * Looking for test storage... 
00:24:30.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:30.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.752 --rc genhtml_branch_coverage=1 00:24:30.752 --rc genhtml_function_coverage=1 00:24:30.752 --rc genhtml_legend=1 00:24:30.752 --rc geninfo_all_blocks=1 00:24:30.752 --rc geninfo_unexecuted_blocks=1 00:24:30.752 00:24:30.752 ' 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:30.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.752 --rc genhtml_branch_coverage=1 00:24:30.752 --rc genhtml_function_coverage=1 00:24:30.752 --rc genhtml_legend=1 00:24:30.752 --rc geninfo_all_blocks=1 00:24:30.752 --rc geninfo_unexecuted_blocks=1 00:24:30.752 00:24:30.752 ' 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:30.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.752 --rc genhtml_branch_coverage=1 00:24:30.752 --rc genhtml_function_coverage=1 00:24:30.752 --rc genhtml_legend=1 00:24:30.752 --rc geninfo_all_blocks=1 00:24:30.752 --rc geninfo_unexecuted_blocks=1 00:24:30.752 00:24:30.752 ' 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:30.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.752 --rc genhtml_branch_coverage=1 00:24:30.752 --rc genhtml_function_coverage=1 00:24:30.752 --rc genhtml_legend=1 00:24:30.752 --rc geninfo_all_blocks=1 00:24:30.752 --rc geninfo_unexecuted_blocks=1 00:24:30.752 00:24:30.752 ' 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.752 07:38:48 
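The cmp_versions trace above (split both version strings on '.', '-' and ':', then compare field by field) is how the suite decides that the installed lcov 1.15 predates 2.x and needs the legacy --rc options. A compacted sketch of the same idea, not the script's exact code:

    lt() {   # succeed when version $1 sorts before version $2
        local IFS='.-:' i
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 < 2"   # matches the trace: 1 < 2 on the first field

Missing fields default to 0, which is what the ver1_l/ver2_l length bookkeeping in the trace is guarding.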
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.752 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:30.753 
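The '[: : integer expression expected' complaint buried above is a real, if harmless, script bug: build_nvmf_app_args reaches test/nvmf/common.sh line 33 with an unset variable, so test(1) evaluates '[' '' -eq 1 ']' and is handed an empty string where it expects an integer. The run continues because the failed test simply falls through to the next branch. The usual hardening, shown with an illustrative variable name rather than the script's:

    # Fragile: an unset/empty flag reaches the numeric test as ''.
    [ "$SOME_FLAG" -eq 1 ] && echo "flag set"      # -> integer expression expected

    # Robust: give the expansion a numeric default first.
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"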
07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.753 07:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.892 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:38.893 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:38.893 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:38.893 Found net devices under 0000:31:00.0: cvl_0_0 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:38.893 Found net devices under 0000:31:00.1: cvl_0_1 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.893 07:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:24:38.893 00:24:38.893 --- 10.0.0.2 ping statistics --- 00:24:38.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.893 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:24:38.893 00:24:38.893 --- 10.0.0.1 ping statistics --- 00:24:38.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.893 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3505451 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3505451 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 3505451 ']' 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:38.893 07:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.893 [2024-11-20 07:38:56.358836] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
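What the trace above records is nvmf_tcp_init carving a target/initiator pair out of a single host: the first E810 port (cvl_0_0) is moved into a fresh network namespace and addressed as the target at 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the firewall rule is tagged with an SPDK_NVMF comment so cleanup can find it later. A minimal standalone sketch of the same setup, assuming the interface names and /24 from this run and root privileges:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                 # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # comment-tag the rule so teardown can strip it with 'grep -v SPDK_NVMF'
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                              # root namespace -> namespaced target
  ip netns exec "$NS" ping -c 1 10.0.0.1          # and back

With that in place, every target-side command in the rest of the log is simply prefixed with ip netns exec cvl_0_0_ns_spdk, which is exactly what the NVMF_TARGET_NS_CMD array holds.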
00:24:38.893 [2024-11-20 07:38:56.358908] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.893 [2024-11-20 07:38:56.466382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.893 [2024-11-20 07:38:56.511200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.893 [2024-11-20 07:38:56.511244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.893 [2024-11-20 07:38:56.511252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.893 [2024-11-20 07:38:56.511260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.893 [2024-11-20 07:38:56.511266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.893 [2024-11-20 07:38:56.512917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.894 [2024-11-20 07:38:56.513054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.894 [2024-11-20 07:38:56.513331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.894 [2024-11-20 07:38:56.513332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.155 07:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:39.155 07:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:24:39.155 07:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:39.155 [2024-11-20 07:38:57.323336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.155 07:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:39.155 07:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.155 07:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.416 07:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:39.416 Malloc1 00:24:39.416 07:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.677 07:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:39.939 07:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.939 [2024-11-20 07:38:58.106665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.939 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:40.200 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:40.201 07:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:40.770 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:40.770 fio-3.35 00:24:40.770 Starting 1 thread 00:24:43.333 00:24:43.333 test: (groupid=0, jobs=1): 
err= 0: pid=3506138: Wed Nov 20 07:39:01 2024 00:24:43.333 read: IOPS=13.0k, BW=50.9MiB/s (53.3MB/s)(102MiB/2005msec) 00:24:43.333 slat (usec): min=2, max=279, avg= 2.18, stdev= 2.50 00:24:43.333 clat (usec): min=3329, max=9476, avg=5410.94, stdev=485.61 00:24:43.333 lat (usec): min=3363, max=9478, avg=5413.12, stdev=485.71 00:24:43.333 clat percentiles (usec): 00:24:43.333 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5080], 00:24:43.333 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5473], 00:24:43.333 | 70.00th=[ 5538], 80.00th=[ 5669], 90.00th=[ 5866], 95.00th=[ 6063], 00:24:43.333 | 99.00th=[ 7635], 99.50th=[ 7963], 99.90th=[ 8717], 99.95th=[ 8979], 00:24:43.333 | 99.99th=[ 9110] 00:24:43.333 bw ( KiB/s): min=49344, max=53136, per=100.00%, avg=52086.00, stdev=1832.98, samples=4 00:24:43.333 iops : min=12336, max=13284, avg=13021.50, stdev=458.24, samples=4 00:24:43.333 write: IOPS=13.0k, BW=50.8MiB/s (53.3MB/s)(102MiB/2005msec); 0 zone resets 00:24:43.334 slat (usec): min=2, max=268, avg= 2.23, stdev= 1.83 00:24:43.334 clat (usec): min=2912, max=8721, avg=4369.84, stdev=425.32 00:24:43.334 lat (usec): min=2930, max=8723, avg=4372.08, stdev=425.50 00:24:43.334 clat percentiles (usec): 00:24:43.334 | 1.00th=[ 3654], 5.00th=[ 3884], 10.00th=[ 3949], 20.00th=[ 4080], 00:24:43.334 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424], 00:24:43.334 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 4883], 00:24:43.334 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7504], 99.95th=[ 8094], 00:24:43.334 | 99.99th=[ 8717] 00:24:43.334 bw ( KiB/s): min=49832, max=52928, per=100.00%, avg=52062.00, stdev=1489.90, samples=4 00:24:43.334 iops : min=12458, max=13232, avg=13015.50, stdev=372.47, samples=4 00:24:43.334 lat (msec) : 4=6.31%, 10=93.69% 00:24:43.334 cpu : usr=77.50%, sys=21.36%, ctx=48, majf=0, minf=17 00:24:43.334 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:43.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:43.334 issued rwts: total=26105,26096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.334 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:43.334 00:24:43.334 Run status group 0 (all jobs): 00:24:43.334 READ: bw=50.9MiB/s (53.3MB/s), 50.9MiB/s-50.9MiB/s (53.3MB/s-53.3MB/s), io=102MiB (107MB), run=2005-2005msec 00:24:43.334 WRITE: bw=50.8MiB/s (53.3MB/s), 50.8MiB/s-50.8MiB/s (53.3MB/s-53.3MB/s), io=102MiB (107MB), run=2005-2005msec 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:43.334 
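The fio job that just completed (and the mock_sgl_config run being set up here) goes through SPDK's userspace fio engine rather than the kernel initiator. fio_plugin first runs ldd over build/fio/spdk_nvme and greps for libasan/libclang_rt.asan so that, on sanitizer builds, the runtime can be preloaded ahead of the engine (both come back empty in this run), then LD_PRELOADs the engine and passes the NVMe-oF connection parameters through fio's --filename as key=value pairs. Condensed to its effective commands, with the workspace paths used above:

  PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
  # a sanitizer runtime, if any, must appear before the plugin in LD_PRELOAD
  asan_lib=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $PLUGIN" /usr/src/fio/fio \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The job file sets ioengine=spdk, which is why the result headers above report ioengine=spdk, iodepth=128.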
07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:43.334 07:39:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:43.594 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:43.594 fio-3.35 00:24:43.594 Starting 1 thread 00:24:45.598 [2024-11-20 07:39:03.662034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe14e40 is same with the state(6) to be set 00:24:45.598 [2024-11-20 07:39:03.662074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe14e40 is same with the state(6) to be set 00:24:45.598 [2024-11-20 07:39:03.662080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe14e40 is same with the state(6) to be set 00:24:45.859 00:24:45.859 test: (groupid=0, jobs=1): err= 0: pid=3506984: Wed Nov 20 07:39:03 2024 00:24:45.859 read: IOPS=9658, BW=151MiB/s (158MB/s)(302MiB/2003msec) 00:24:45.859 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.58 00:24:45.860 clat (usec): min=1469, max=16669, avg=8017.77, stdev=1988.56 00:24:45.860 lat (usec): min=1472, max=16673, avg=8021.37, stdev=1988.69 00:24:45.860 clat percentiles (usec): 00:24:45.860 | 1.00th=[ 4015], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6194], 00:24:45.860 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8455], 00:24:45.860 | 
70.00th=[ 9110], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11469], 00:24:45.860 | 99.00th=[12780], 99.50th=[13435], 99.90th=[14222], 99.95th=[14484], 00:24:45.860 | 99.99th=[14877] 00:24:45.860 bw ( KiB/s): min=69984, max=86336, per=49.82%, avg=76992.00, stdev=6854.03, samples=4 00:24:45.860 iops : min= 4374, max= 5396, avg=4812.00, stdev=428.38, samples=4 00:24:45.860 write: IOPS=5701, BW=89.1MiB/s (93.4MB/s)(158MiB/1769msec); 0 zone resets 00:24:45.860 slat (usec): min=39, max=448, avg=40.92, stdev= 8.07 00:24:45.860 clat (usec): min=1682, max=16437, avg=9079.96, stdev=1419.74 00:24:45.860 lat (usec): min=1722, max=16575, avg=9120.88, stdev=1421.73 00:24:45.860 clat percentiles (usec): 00:24:45.860 | 1.00th=[ 6259], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 7832], 00:24:45.860 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:24:45.860 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10814], 95.00th=[11469], 00:24:45.860 | 99.00th=[12649], 99.50th=[13435], 99.90th=[15401], 99.95th=[15926], 00:24:45.860 | 99.99th=[16319] 00:24:45.860 bw ( KiB/s): min=72992, max=89568, per=88.05%, avg=80320.00, stdev=6860.50, samples=4 00:24:45.860 iops : min= 4562, max= 5598, avg=5020.00, stdev=428.78, samples=4 00:24:45.860 lat (msec) : 2=0.05%, 4=0.67%, 10=78.41%, 20=20.87% 00:24:45.860 cpu : usr=84.62%, sys=14.44%, ctx=13, majf=0, minf=33 00:24:45.860 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:45.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:45.860 issued rwts: total=19346,10086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:45.860 00:24:45.860 Run status group 0 (all jobs): 00:24:45.860 READ: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=302MiB (317MB), run=2003-2003msec 00:24:45.860 WRITE: bw=89.1MiB/s (93.4MB/s), 89.1MiB/s-89.1MiB/s (93.4MB/s-93.4MB/s), io=158MiB (165MB), run=1769-1769msec 00:24:45.860 07:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:46.120 rmmod nvme_tcp 00:24:46.120 rmmod nvme_fabrics 00:24:46.120 rmmod nvme_keyring 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:46.120 
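The rmmod output above is the start of nvmftestfini, and the lines that follow trace the rest of it: unwind the setup in reverse by unloading the kernel NVMe/TCP initiator stack, killing the nvmf_tgt reactor process and waiting for it, restoring iptables minus every rule carrying the SPDK_NVMF comment, then dropping the namespace and flushing the initiator address. A condensed sketch, assuming the pid and names from this run (ip netns delete stands in for the script's _remove_spdk_ns helper):

  modprobe -v -r nvme-tcp                         # also pulls nvme_fabrics/nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"              # nvmfpid=3505451 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk                 # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1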
07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3505451 ']' 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3505451 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 3505451 ']' 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 3505451 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3505451 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:46.120 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3505451' 00:24:46.121 killing process with pid 3505451 00:24:46.121 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 3505451 00:24:46.121 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 3505451 00:24:46.381 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:46.381 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:46.381 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:46.381 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:46.381 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:46.381 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:46.381 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:46.381 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.381 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.381 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.381 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.381 07:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.297 07:39:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.297 00:24:48.297 real 0m17.852s 00:24:48.297 user 1m7.073s 00:24:48.297 sys 0m7.760s 00:24:48.297 07:39:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:48.297 07:39:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.297 ************************************ 00:24:48.297 END TEST nvmf_fio_host 00:24:48.297 ************************************ 00:24:48.297 07:39:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:48.297 07:39:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # 
'[' 3 -le 1 ']' 00:24:48.297 07:39:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:48.297 07:39:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.561 ************************************ 00:24:48.561 START TEST nvmf_failover 00:24:48.561 ************************************ 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:48.561 * Looking for test storage... 00:24:48.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:48.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.561 --rc genhtml_branch_coverage=1 00:24:48.561 --rc genhtml_function_coverage=1 00:24:48.561 --rc genhtml_legend=1 00:24:48.561 --rc geninfo_all_blocks=1 00:24:48.561 --rc geninfo_unexecuted_blocks=1 00:24:48.561 00:24:48.561 ' 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:48.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.561 --rc genhtml_branch_coverage=1 00:24:48.561 --rc genhtml_function_coverage=1 00:24:48.561 --rc genhtml_legend=1 00:24:48.561 --rc geninfo_all_blocks=1 00:24:48.561 --rc geninfo_unexecuted_blocks=1 00:24:48.561 00:24:48.561 ' 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:48.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.561 --rc genhtml_branch_coverage=1 00:24:48.561 --rc genhtml_function_coverage=1 00:24:48.561 --rc genhtml_legend=1 00:24:48.561 --rc geninfo_all_blocks=1 00:24:48.561 --rc geninfo_unexecuted_blocks=1 00:24:48.561 00:24:48.561 ' 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:48.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.561 --rc genhtml_branch_coverage=1 00:24:48.561 --rc genhtml_function_coverage=1 00:24:48.561 --rc genhtml_legend=1 00:24:48.561 --rc geninfo_all_blocks=1 00:24:48.561 --rc geninfo_unexecuted_blocks=1 00:24:48.561 00:24:48.561 ' 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.561 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:48.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
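One real failure is recorded just above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq demands integers on both sides, so a flag that is unset (expanding to the empty string) makes the comparison error out with "integer expression expected" instead of simply evaluating false. The usual hardening, shown here with a hypothetical FLAG variable rather than the script's actual one, is a numeric default in the expansion:

  FLAG=''
  if [ "$FLAG" -eq 1 ]; then echo enabled; fi       # errors: integer expression expected
  if [ "${FLAG:-0}" -eq 1 ]; then echo enabled; fi  # empty/unset falls back to 0; test is well-formed

The run continues because the failing test only gates a branch, but the stderr noise lands in every log that sources this file.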
00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.562 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.823 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:48.823 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:48.824 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:48.824 07:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.965 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:56.966 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:56.966 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:56.966 Found net devices under 0000:31:00.0: cvl_0_0 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:56.966 Found net devices under 0000:31:00.1: cvl_0_1 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:56.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:24:56.966 00:24:56.966 --- 10.0.0.2 ping statistics --- 00:24:56.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.966 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:24:56.966 00:24:56.966 --- 10.0.0.1 ping statistics --- 00:24:56.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.966 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3512189 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3512189 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3512189 ']' 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:56.966 07:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:56.967 [2024-11-20 07:39:14.544967] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:24:56.967 [2024-11-20 07:39:14.545045] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.967 [2024-11-20 07:39:14.646042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:56.967 [2024-11-20 07:39:14.697129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:56.967 [2024-11-20 07:39:14.697189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.967 [2024-11-20 07:39:14.697198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.967 [2024-11-20 07:39:14.697205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.967 [2024-11-20 07:39:14.697211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.967 [2024-11-20 07:39:14.699102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.967 [2024-11-20 07:39:14.699275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:56.967 [2024-11-20 07:39:14.699279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.228 07:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:57.228 07:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:57.228 07:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:57.228 07:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.228 07:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.228 07:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.228 07:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:57.488 [2024-11-20 07:39:15.576764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.488 07:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:57.749 Malloc0 00:24:57.749 07:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.010 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:58.271 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.271 [2024-11-20 07:39:16.397721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.271 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:58.531 [2024-11-20 07:39:16.594261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:58.531 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:58.792 [2024-11-20 07:39:16.790969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 ***
00:24:58.792 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3512599
00:24:58.792 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:24:58.792 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:58.792 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3512599 /var/tmp/bdevperf.sock
00:24:58.792 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3512599 ']'
00:24:58.792 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:58.792 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:58.792 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:58.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:58.792 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:24:58.792 07:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:59.732 07:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:24:59.732 07:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:24:59.732 07:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:59.992 NVMe0n1
00:24:59.993 07:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:00.253
00:25:00.253 07:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3512935
00:25:00.253 07:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:00.253 07:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:25:01.209 07:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:01.209 [2024-11-20 07:39:19.391582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21256d0 is same with the state(6) to be set
00:25:01.212 (previous message repeated several dozen times for tqpair=0x21256d0, last at [2024-11-20 07:39:19.391945])
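The two bdev_nvme_attach_controller calls above are what make the later listener teardowns survivable: both name the same controller (-b NVMe0) and the same subsystem NQN, and -x failover registers the second address as an alternate path rather than a second controller. A minimal sketch of that pairing, using the rpc.py and socket paths shown in this run (the RPC and NQN shell variables are shorthand introduced here, not part of the test script):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Primary path on port 4420; this creates the NVMe0n1 bdev that bdevperf drives.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -x failover
# Same -b name plus -x failover: port 4421 becomes a standby path to the
# same namespace instead of a new controller, so I/O can move when 4420 goes away.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x failover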
00:25:01.473 07:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:04.771 07:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:04.771
00:25:04.771 07:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:04.771 07:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:08.070 07:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:08.070 [2024-11-20 07:39:26.069093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:08.070 07:39:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:09.011 07:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:09.272 [2024-11-20 07:39:27.260996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22717a0 is same with the state(6) to be set
00:25:09.273 (previous message repeated several dozen times for tqpair=0x22717a0, last at [2024-11-20 07:39:27.261229])
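Stripped of the xtrace noise, the exercise driven by host/failover.sh is a timed listener shuffle while bdevperf keeps 128 queued I/Os in flight; each remove_listener pulls the path the host is currently using, and the recv-state errors above coincide with the target tearing those qpairs down. A condensed sketch of the sequence as it ran here, with the same RPC and NQN shorthand as in the previous sketch:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # drop the active path; host fails over to 4421
sleep 3
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover            # register a third path
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # force another failover
sleep 3
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # restore the original listener
sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422   # and fail back onto 4420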
00:25:09.273 07:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3512935
00:25:15.863 {
00:25:15.863   "results": [
00:25:15.863     {
00:25:15.863       "job": "NVMe0n1",
00:25:15.863       "core_mask": "0x1",
00:25:15.863       "workload": "verify",
00:25:15.863       "status": "finished",
00:25:15.863       "verify_range": {
00:25:15.863         "start": 0,
00:25:15.863         "length": 16384
00:25:15.863       },
00:25:15.863       "queue_depth": 128,
00:25:15.863       "io_size": 4096,
00:25:15.863       "runtime": 15.002325,
00:25:15.863       "iops": 12483.06512490564,
00:25:15.863       "mibps": 48.76197314416265,
00:25:15.863       "io_failed": 4813,
00:25:15.863       "io_timeout": 0,
00:25:15.863       "avg_latency_us": 9975.893626532285,
00:25:15.863       "min_latency_us": 539.3066666666666,
00:25:15.863       "max_latency_us": 19005.44
00:25:15.863     }
00:25:15.863   ],
00:25:15.863   "core_count": 1
00:25:15.863 }
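The results block above is worth a quick consistency read when triaging these runs: mibps is just iops times the 4096-byte io_size, and the io_failed count (4813) is consistent with the ABORTED - SQ DELETION completions dumped in try.txt below rather than any data-integrity failure. A standalone check of that arithmetic (plain awk, nothing from the test suite):

awk 'BEGIN {
    iops = 12483.06512490564; io_size = 4096; runtime = 15.002325   # from the results block
    printf "throughput: %.2f MiB/s\n", iops * io_size / 1048576     # -> 48.76, matching "mibps"
    printf "completed:  %.0f I/Os over %.1f s\n", iops * runtime, runtime
}'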
00:25:15.863 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3512599
00:25:15.863 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3512599 ']'
00:25:15.863 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3512599
00:25:15.863 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:25:15.863 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:15.863 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3512599
00:25:15.863 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:25:15.863 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:25:15.863 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3512599'
00:25:15.863 killing process with pid 3512599
00:25:15.863 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3512599
00:25:15.863 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3512599
00:25:15.863 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:15.863 [2024-11-20 07:39:16.868560] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
00:25:15.863 [2024-11-20 07:39:16.868636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3512599 ]
00:25:15.863 [2024-11-20 07:39:16.962218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:15.863 [2024-11-20 07:39:17.014981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:15.863 Running I/O for 15 seconds...
00:25:15.863 10955.00 IOPS, 42.79 MiB/s [2024-11-20T06:39:34.073Z] [2024-11-20 07:39:19.395136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.863 [2024-11-20 07:39:19.395170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.863 (the command/completion pair above repeats for every I/O that was in flight when the path dropped: READs for lba 94864 through 95016 and WRITEs for lba 95032 through 95680, each completed with ABORTED - SQ DELETION (00/08))
00:25:15.866 [2024-11-20 07:39:19.396936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:15.866 [2024-11-20 07:39:19.396944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95688 len:8 PRP1 0x0 PRP2 0x0
00:25:15.866 [2024-11-20 07:39:19.396952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.866 [2024-11-20 07:39:19.396963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:15.866 (the manual completion / abort sequence repeats for the remaining queued WRITEs, lba 95696 through 95752)
00:25:15.866 [2024-11-20 07:39:19.397180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:15.866 [2024-11-20 07:39:19.397186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:0 nsid:1 lba:95760 len:8 PRP1 0x0 PRP2 0x0 00:25:15.866 [2024-11-20 07:39:19.397192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-11-20 07:39:19.397200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.866 [2024-11-20 07:39:19.397206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.866 [2024-11-20 07:39:19.397212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95768 len:8 PRP1 0x0 PRP2 0x0 00:25:15.866 [2024-11-20 07:39:19.397218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-11-20 07:39:19.397226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.866 [2024-11-20 07:39:19.397231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.866 [2024-11-20 07:39:19.397237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95776 len:8 PRP1 0x0 PRP2 0x0 00:25:15.866 [2024-11-20 07:39:19.397244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-11-20 07:39:19.397251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.866 [2024-11-20 07:39:19.397256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.866 [2024-11-20 07:39:19.397263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95784 len:8 PRP1 0x0 PRP2 0x0 00:25:15.866 [2024-11-20 07:39:19.397270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-11-20 07:39:19.397278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.866 [2024-11-20 07:39:19.397283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.866 [2024-11-20 07:39:19.397291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95792 len:8 PRP1 0x0 PRP2 0x0 00:25:15.866 [2024-11-20 07:39:19.397298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-11-20 07:39:19.397306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.866 [2024-11-20 07:39:19.397312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.866 [2024-11-20 07:39:19.397318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95800 len:8 PRP1 0x0 PRP2 0x0 00:25:15.866 [2024-11-20 07:39:19.397325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-11-20 07:39:19.397332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.866 [2024-11-20 07:39:19.397338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.866 [2024-11-20 07:39:19.397344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95808 len:8 PRP1 0x0 PRP2 0x0 
00:25:15.866 [2024-11-20 07:39:19.397351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-11-20 07:39:19.397358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.866 [2024-11-20 07:39:19.397364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.866 [2024-11-20 07:39:19.397370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95816 len:8 PRP1 0x0 PRP2 0x0 00:25:15.866 [2024-11-20 07:39:19.397377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-11-20 07:39:19.397385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.866 [2024-11-20 07:39:19.397390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.866 [2024-11-20 07:39:19.397396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95824 len:8 PRP1 0x0 PRP2 0x0 00:25:15.866 [2024-11-20 07:39:19.397403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-11-20 07:39:19.397410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.866 [2024-11-20 07:39:19.397416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.866 [2024-11-20 07:39:19.397422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95832 len:8 PRP1 0x0 PRP2 0x0 00:25:15.866 [2024-11-20 07:39:19.397432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-11-20 07:39:19.397440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.866 [2024-11-20 07:39:19.397445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.866 [2024-11-20 07:39:19.397451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95840 len:8 PRP1 0x0 PRP2 0x0 00:25:15.866 [2024-11-20 07:39:19.397458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-11-20 07:39:19.397466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.866 [2024-11-20 07:39:19.397472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.866 [2024-11-20 07:39:19.397478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95848 len:8 PRP1 0x0 PRP2 0x0 00:25:15.866 [2024-11-20 07:39:19.397485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:19.397494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.867 [2024-11-20 07:39:19.397499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.867 [2024-11-20 07:39:19.397505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95856 len:8 PRP1 0x0 PRP2 0x0 00:25:15.867 [2024-11-20 07:39:19.397512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:19.397520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.867 [2024-11-20 07:39:19.397525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.867 [2024-11-20 07:39:19.397531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95864 len:8 PRP1 0x0 PRP2 0x0 00:25:15.867 [2024-11-20 07:39:19.397538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:19.397545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.867 [2024-11-20 07:39:19.397550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.867 [2024-11-20 07:39:19.397556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95872 len:8 PRP1 0x0 PRP2 0x0 00:25:15.867 [2024-11-20 07:39:19.397563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:19.397571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:15.867 [2024-11-20 07:39:19.397576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:15.867 [2024-11-20 07:39:19.397583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95024 len:8 PRP1 0x0 PRP2 0x0 00:25:15.867 [2024-11-20 07:39:19.397589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:19.397632] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:15.867 [2024-11-20 07:39:19.397656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.867 [2024-11-20 07:39:19.397664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:19.397672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.867 [2024-11-20 07:39:19.397680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:19.397688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.867 [2024-11-20 07:39:19.407359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:19.407391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.867 [2024-11-20 07:39:19.407401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:19.407409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in 
failed state. 00:25:15.867 [2024-11-20 07:39:19.407447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb90fc0 (9): Bad file descriptor 00:25:15.867 [2024-11-20 07:39:19.411015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:15.867 [2024-11-20 07:39:19.448375] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:25:15.867 10815.00 IOPS, 42.25 MiB/s [2024-11-20T06:39:34.077Z] 10903.33 IOPS, 42.59 MiB/s [2024-11-20T06:39:34.077Z] 11194.25 IOPS, 43.73 MiB/s [2024-11-20T06:39:34.077Z] [2024-11-20 07:39:22.877632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:103 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34344 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:15.867 [2024-11-20 07:39:22.877895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.867 [2024-11-20 07:39:22.877901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.877906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.877912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.877918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.877924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.877929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.877935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.877940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.877947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.877952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.877958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.877963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.877972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.877978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.877985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.877990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.877997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 
[2024-11-20 07:39:22.878013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.868 [2024-11-20 07:39:22.878236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.868 [2024-11-20 07:39:22.878248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:33576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.868 [2024-11-20 07:39:22.878259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.868 [2024-11-20 07:39:22.878272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.868 [2024-11-20 07:39:22.878283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:33600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.868 [2024-11-20 07:39:22.878294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.868 [2024-11-20 07:39:22.878306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.868 [2024-11-20 07:39:22.878317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.868 [2024-11-20 07:39:22.878328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.868 [2024-11-20 07:39:22.878340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:33640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.868 [2024-11-20 07:39:22.878351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.868 [2024-11-20 07:39:22.878357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.868 [2024-11-20 07:39:22.878362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:33712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:33720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:33728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 
[2024-11-20 07:39:22.878485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:33736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:33848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:33880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:87 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:33952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.869 [2024-11-20 07:39:22.878813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.869 [2024-11-20 07:39:22.878819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.870 [2024-11-20 07:39:22.878824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.870 [2024-11-20 07:39:22.878830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:33976 len:8 
00:25:15.870 [2024-11-20 07:39:22.878835 - 07:39:22.879147] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated entries condensed: queued READ commands (sqid:1, nsid:1, lba 33984-34192, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:15.870 [2024-11-20 07:39:22.879162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:15.870 [2024-11-20 07:39:22.879167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:15.870 [2024-11-20 07:39:22.879173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34584 len:8 PRP1 0x0 PRP2 0x0
00:25:15.870 [2024-11-20 07:39:22.879178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.870 [2024-11-20 07:39:22.879214] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:15.870 [2024-11-20 07:39:22.879231 - 07:39:22.879270] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated entries condensed: four admin ASYNC EVENT REQUEST (0c) commands (qid:0, cid:3..0), each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:15.870 [2024-11-20 07:39:22.879276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:25:15.870 [2024-11-20 07:39:22.879295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb90fc0 (9): Bad file descriptor
00:25:15.870 [2024-11-20 07:39:22.881730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:25:15.870 [2024-11-20 07:39:22.913136] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
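
The block above is one complete failover cycle: every command still queued on the deleted submission queue is completed as ABORTED - SQ DELETION, bdev_nvme fails the trid over to the next registered path, and the cycle ends with a "Resetting controller successful." notice. The harness's pass/fail check later in this trace (host/failover.sh@65-67) simply counts those notices. A minimal standalone sketch of that check, with the log path as an assumed placeholder rather than the harness's real variable:

#!/usr/bin/env bash
# Sketch: count successful controller resets in a captured bdevperf log and
# fail if the run did not produce the expected three failover cycles.
log_file=${1:-try.txt}   # illustrative default; the real harness uses its own hardcoded path
count=$(grep -c 'Resetting controller successful' "$log_file")
if (( count != 3 )); then
    echo "expected 3 successful resets, saw $count" >&2
    exit 1
fi
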
00:25:15.870 11436.60 IOPS, 44.67 MiB/s [2024-11-20T06:39:34.080Z] 11676.83 IOPS, 45.61 MiB/s [2024-11-20T06:39:34.080Z] 11889.29 IOPS, 46.44 MiB/s [2024-11-20T06:39:34.080Z] 12018.75 IOPS, 46.95 MiB/s [2024-11-20T06:39:34.080Z]
00:25:15.871 [2024-11-20 07:39:27.262375 - 07:39:27.263752] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated entries condensed: queued WRITE commands (sqid:1, nsid:1, lba 108248-108912, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1, nsid:1, lba 107992-108240, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:15.874 [2024-11-20 07:39:27.263768 - 07:39:27.263983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs / 558:nvme_qpair_manual_complete_request: [repeated entries condensed: "aborting queued i/o" errors interleaved with queued WRITE commands (sqid:1, cid:0, nsid:1, lba 108920-109008, len:8, PRP1 0x0 PRP2 0x0) completed manually, each ABORTED - SQ DELETION (00/08)]
00:25:15.874 [2024-11-20 07:39:27.275075] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:15.874 [2024-11-20 07:39:27.275118 - 07:39:27.275161] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated entries condensed: four admin ASYNC EVENT REQUEST (0c) commands (qid:0, cid:0..3), each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:15.874 [2024-11-20 07:39:27.275166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:25:15.874 [2024-11-20 07:39:27.275190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb90fc0 (9): Bad file descriptor
00:25:15.874 [2024-11-20 07:39:27.277650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:25:15.874 [2024-11-20 07:39:27.302002] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:25:15.874 12097.44 IOPS, 47.26 MiB/s [2024-11-20T06:39:34.084Z] 12211.40 IOPS, 47.70 MiB/s [2024-11-20T06:39:34.084Z] 12275.09 IOPS, 47.95 MiB/s [2024-11-20T06:39:34.084Z] 12328.42 IOPS, 48.16 MiB/s [2024-11-20T06:39:34.084Z] 12389.62 IOPS, 48.40 MiB/s [2024-11-20T06:39:34.084Z] 12441.00 IOPS, 48.60 MiB/s
00:25:15.874 Latency(us)
00:25:15.874 [2024-11-20T06:39:34.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:15.874 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:15.874 Verification LBA range: start 0x0 length 0x4000
00:25:15.874 NVMe0n1 : 15.00 12483.07 48.76 320.82 0.00 9975.89 539.31 19005.44
00:25:15.874 [2024-11-20T06:39:34.084Z] ===================================================================================================================
00:25:15.874 [2024-11-20T06:39:34.084Z] Total : 12483.07 48.76 320.82 0.00 9975.89 539.31 19005.44
00:25:15.874 Received shutdown signal, test time was about 15.000000 seconds
00:25:15.874
00:25:15.874 Latency(us)
00:25:15.874 [2024-11-20T06:39:34.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:15.874 [2024-11-20T06:39:34.084Z] ===================================================================================================================
00:25:15.874 [2024-11-20T06:39:34.084Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:15.874 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:15.874 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:15.874 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:15.874 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3515800
00:25:15.874 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3515800 /var/tmp/bdevperf.sock
00:25:15.874 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:15.874 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3515800 ']'
00:25:15.874 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:15.874 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:15.874 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:15.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
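
The trace that follows registers two extra TCP listeners on the subsystem, attaches the same controller to the freshly started bdevperf through ports 4420-4422 with -x failover so the additional trids are recorded as standby failover paths, and then detaches the active path to force the failover behaviour exercised above. A consolidated sketch of that sequence, assembled from the rpc.py invocations visible below (paths, NQN, and RPC socket are taken from the trace; wrapping them in variables and a loop is an assumption):

#!/usr/bin/env bash
# Sketch: register alternate NVMe/TCP paths as failover trids, then drop
# the active path to trigger a failover (mirrors host/failover.sh@76-87).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1
SOCK=/var/tmp/bdevperf.sock

# Target side: listen on two additional ports.
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

# Initiator side (bdevperf): attach the controller once, then add the other
# two ports; with -x failover the extra trids become failover paths.
for port in 4420 4421 4422; do
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN" -x failover
done
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0

# Drop the path currently in use; bdev_nvme fails over to the next trid.
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
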
00:25:15.874 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:15.874 07:39:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:16.445 07:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:16.445 07:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:16.445 07:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:16.445 [2024-11-20 07:39:34.562811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:16.445 07:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:16.706 [2024-11-20 07:39:34.739231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:16.706 07:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:16.966 NVMe0n1 00:25:16.966 07:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.227 00:25:17.227 07:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.487 00:25:17.487 07:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.487 07:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:17.748 07:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:17.748 07:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:21.045 07:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:21.045 07:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:21.045 07:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3516943 00:25:21.045 07:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:21.045 07:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3516943 00:25:21.987 { 00:25:21.987 "results": [ 00:25:21.987 { 00:25:21.987 "job": "NVMe0n1", 00:25:21.987 "core_mask": "0x1", 
00:25:21.987 "workload": "verify", 00:25:21.987 "status": "finished", 00:25:21.987 "verify_range": { 00:25:21.987 "start": 0, 00:25:21.987 "length": 16384 00:25:21.987 }, 00:25:21.987 "queue_depth": 128, 00:25:21.987 "io_size": 4096, 00:25:21.987 "runtime": 1.008204, 00:25:21.987 "iops": 12754.363204272151, 00:25:21.987 "mibps": 49.82173126668809, 00:25:21.987 "io_failed": 0, 00:25:21.987 "io_timeout": 0, 00:25:21.987 "avg_latency_us": 10000.815557456515, 00:25:21.987 "min_latency_us": 2211.84, 00:25:21.987 "max_latency_us": 8519.68 00:25:21.987 } 00:25:21.987 ], 00:25:21.987 "core_count": 1 00:25:21.987 } 00:25:22.249 07:39:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:22.249 [2024-11-20 07:39:33.621711] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:25:22.249 [2024-11-20 07:39:33.621776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515800 ] 00:25:22.249 [2024-11-20 07:39:33.707302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.249 [2024-11-20 07:39:33.736117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.249 [2024-11-20 07:39:35.877493] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:22.249 [2024-11-20 07:39:35.877532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.249 [2024-11-20 07:39:35.877542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.249 [2024-11-20 07:39:35.877549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.249 [2024-11-20 07:39:35.877555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.249 [2024-11-20 07:39:35.877561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.249 [2024-11-20 07:39:35.877566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.249 [2024-11-20 07:39:35.877572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.250 [2024-11-20 07:39:35.877577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.250 [2024-11-20 07:39:35.877583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:25:22.250 [2024-11-20 07:39:35.877603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:22.250 [2024-11-20 07:39:35.877614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afcfc0 (9): Bad file descriptor 00:25:22.250 [2024-11-20 07:39:36.010922] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:22.250 Running I/O for 1 seconds... 00:25:22.250 12730.00 IOPS, 49.73 MiB/s 00:25:22.250 Latency(us) 00:25:22.250 [2024-11-20T06:39:40.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.250 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:22.250 Verification LBA range: start 0x0 length 0x4000 00:25:22.250 NVMe0n1 : 1.01 12754.36 49.82 0.00 0.00 10000.82 2211.84 8519.68 00:25:22.250 [2024-11-20T06:39:40.460Z] =================================================================================================================== 00:25:22.250 [2024-11-20T06:39:40.460Z] Total : 12754.36 49.82 0.00 0.00 10000.82 2211.84 8519.68 00:25:22.250 07:39:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.250 07:39:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:22.250 07:39:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.511 07:39:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.511 07:39:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:22.772 07:39:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.772 07:39:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:26.071 07:39:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.071 07:39:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:26.071 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3515800 00:25:26.071 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3515800 ']' 00:25:26.071 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3515800 00:25:26.071 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:26.071 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:26.071 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3515800 00:25:26.071 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:26.071 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:26.071 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3515800' 00:25:26.071 killing process with pid 3515800 00:25:26.071 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3515800 00:25:26.071 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3515800 00:25:26.331 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:26.331 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:26.331 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:26.331 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:26.331 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:26.331 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:26.331 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:26.331 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:26.331 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:26.331 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:26.331 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:26.331 rmmod nvme_tcp 00:25:26.331 rmmod nvme_fabrics 00:25:26.331 rmmod nvme_keyring 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3512189 ']' 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3512189 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3512189 ']' 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3512189 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3512189 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3512189' 00:25:26.592 killing process with pid 3512189 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3512189 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3512189 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.592 07:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.137 07:39:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:29.137 00:25:29.137 real 0m40.306s 00:25:29.137 user 2m2.843s 00:25:29.137 sys 0m9.051s 00:25:29.137 07:39:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:29.137 07:39:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:29.137 ************************************ 00:25:29.137 END TEST nvmf_failover 00:25:29.137 ************************************ 00:25:29.137 07:39:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:29.137 07:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:29.137 07:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:29.137 07:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.137 ************************************ 00:25:29.137 START TEST nvmf_host_discovery 00:25:29.137 ************************************ 00:25:29.137 07:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:29.137 * Looking for test storage... 
00:25:29.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:29.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:29.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.138 --rc genhtml_branch_coverage=1 00:25:29.138 --rc genhtml_function_coverage=1 00:25:29.138 --rc genhtml_legend=1 00:25:29.138 --rc geninfo_all_blocks=1 00:25:29.138 --rc geninfo_unexecuted_blocks=1 00:25:29.138 00:25:29.138 ' 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:29.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.138 --rc genhtml_branch_coverage=1 00:25:29.138 --rc genhtml_function_coverage=1 00:25:29.138 --rc genhtml_legend=1 00:25:29.138 --rc geninfo_all_blocks=1 00:25:29.138 --rc geninfo_unexecuted_blocks=1 00:25:29.138 00:25:29.138 ' 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:29.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.138 --rc genhtml_branch_coverage=1 00:25:29.138 --rc genhtml_function_coverage=1 00:25:29.138 --rc genhtml_legend=1 00:25:29.138 --rc geninfo_all_blocks=1 00:25:29.138 --rc geninfo_unexecuted_blocks=1 00:25:29.138 00:25:29.138 ' 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:29.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.138 --rc genhtml_branch_coverage=1 00:25:29.138 --rc genhtml_function_coverage=1 00:25:29.138 --rc genhtml_legend=1 00:25:29.138 --rc geninfo_all_blocks=1 00:25:29.138 --rc geninfo_unexecuted_blocks=1 00:25:29.138 00:25:29.138 ' 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:29.138 07:39:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:29.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:29.138 07:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.280 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:37.281 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:37.281 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.281 07:39:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:37.281 Found net devices under 0000:31:00.0: cvl_0_0 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:37.281 Found net devices under 0000:31:00.1: cvl_0_1 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.281 
07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:37.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:25:37.281 00:25:37.281 --- 10.0.0.2 ping statistics --- 00:25:37.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.281 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:37.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:25:37.281 00:25:37.281 --- 10.0.0.1 ping statistics --- 00:25:37.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.281 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3522100 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3522100 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3522100 ']' 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:37.281 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.282 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:37.282 07:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.282 [2024-11-20 07:39:54.872526] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:25:37.282 [2024-11-20 07:39:54.872589] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.282 [2024-11-20 07:39:54.971305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.282 [2024-11-20 07:39:55.022067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.282 [2024-11-20 07:39:55.022113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.282 [2024-11-20 07:39:55.022121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.282 [2024-11-20 07:39:55.022128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.282 [2024-11-20 07:39:55.022134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:37.282 [2024-11-20 07:39:55.022927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.542 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:37.542 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:37.542 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:37.542 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.542 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.542 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.542 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.542 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.542 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.542 [2024-11-20 07:39:55.739479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.542 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.542 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:37.542 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.542 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.811 [2024-11-20 07:39:55.751739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:37.811 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.811 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:37.811 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.811 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.811 null0 00:25:37.811 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.812 null1 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3522358 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3522358 /tmp/host.sock 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3522358 ']' 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:37.812 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:37.812 07:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.812 [2024-11-20 07:39:55.848089] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:25:37.812 [2024-11-20 07:39:55.848148] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522358 ] 00:25:37.812 [2024-11-20 07:39:55.940199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.812 [2024-11-20 07:39:55.993439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:38.754 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:38.755 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.755 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.755 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.755 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:38.755 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.755 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.755 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.755 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.755 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.755 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.755 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.016 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:39.016 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:39.016 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.016 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.016 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.016 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.016 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.016 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.016 07:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.016 [2024-11-20 07:39:57.019011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:39.016 07:39:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:25:39.016 07:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:39.591 [2024-11-20 07:39:57.729997] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:39.591 [2024-11-20 07:39:57.730028] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:39.591 
[2024-11-20 07:39:57.730043] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:39.851 [2024-11-20 07:39:57.817320] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:39.851 [2024-11-20 07:39:57.877260] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:39.851 [2024-11-20 07:39:57.878605] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa538c0:1 started. 00:25:39.851 [2024-11-20 07:39:57.880531] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:39.851 [2024-11-20 07:39:57.880562] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:39.851 [2024-11-20 07:39:57.887455] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa538c0 was disconnected and freed. delete nvme_qpair. 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:40.112 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.373 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.634 [2024-11-20 07:39:58.677828] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa53aa0:1 started. 00:25:40.634 [2024-11-20 07:39:58.689092] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa53aa0 was disconnected and freed. delete nvme_qpair. 
00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.634 [2024-11-20 07:39:58.767957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:40.634 [2024-11-20 07:39:58.768423] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:40.634 [2024-11-20 07:39:58.768445] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.634 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.894 [2024-11-20 07:39:58.856167] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:40.894 07:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:41.170 [2024-11-20 07:39:59.158766] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:41.170 [2024-11-20 07:39:59.158805] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:41.170 [2024-11-20 07:39:59.158814] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:41.170 [2024-11-20 07:39:59.158820] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:41.740 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:41.740 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:41.740 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:41.740 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:41.740 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:41.741 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:41.741 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:41.741 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.741 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.002 07:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.002 [2024-11-20 07:40:00.044490] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:42.002 [2024-11-20 07:40:00.044512] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:42.002 [2024-11-20 07:40:00.052586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.002 [2024-11-20 07:40:00.052604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.002 [2024-11-20 07:40:00.052615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.002 [2024-11-20 07:40:00.052623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.002 [2024-11-20 07:40:00.052629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.002 [2024-11-20 07:40:00.052634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.002 [2024-11-20 07:40:00.052640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.002 [2024-11-20 07:40:00.052646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.002 [2024-11-20 07:40:00.052655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa23fd0 is same with the state(6) to be set 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.002 [2024-11-20 07:40:00.062599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa23fd0 (9): Bad file descriptor 00:25:42.002 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.002 [2024-11-20 07:40:00.072636] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.002 [2024-11-20 07:40:00.072652] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.002 [2024-11-20 07:40:00.072657] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.002 [2024-11-20 07:40:00.072661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.002 [2024-11-20 07:40:00.072680] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.002 [2024-11-20 07:40:00.073163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-20 07:40:00.073194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa23fd0 with addr=10.0.0.2, port=4420 00:25:42.002 [2024-11-20 07:40:00.073203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa23fd0 is same with the state(6) to be set 00:25:42.002 [2024-11-20 07:40:00.073220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa23fd0 (9): Bad file descriptor 00:25:42.002 [2024-11-20 07:40:00.073231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.003 [2024-11-20 07:40:00.073237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.003 [2024-11-20 07:40:00.073245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.003 [2024-11-20 07:40:00.073253] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.003 [2024-11-20 07:40:00.073258] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:25:42.003 [2024-11-20 07:40:00.073262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.003 [2024-11-20 07:40:00.082710] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.003 [2024-11-20 07:40:00.082720] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.003 [2024-11-20 07:40:00.082724] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.003 [2024-11-20 07:40:00.082727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.003 [2024-11-20 07:40:00.082739] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.003 [2024-11-20 07:40:00.083065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-20 07:40:00.083096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa23fd0 with addr=10.0.0.2, port=4420 00:25:42.003 [2024-11-20 07:40:00.083109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa23fd0 is same with the state(6) to be set 00:25:42.003 [2024-11-20 07:40:00.083133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa23fd0 (9): Bad file descriptor 00:25:42.003 [2024-11-20 07:40:00.083143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.003 [2024-11-20 07:40:00.083148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.003 [2024-11-20 07:40:00.083154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.003 [2024-11-20 07:40:00.083160] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.003 [2024-11-20 07:40:00.083163] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.003 [2024-11-20 07:40:00.083166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.003 [2024-11-20 07:40:00.092770] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.003 [2024-11-20 07:40:00.092782] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.003 [2024-11-20 07:40:00.092785] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.003 [2024-11-20 07:40:00.092789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.003 [2024-11-20 07:40:00.092801] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:42.003 [2024-11-20 07:40:00.093158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-20 07:40:00.093168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa23fd0 with addr=10.0.0.2, port=4420 00:25:42.003 [2024-11-20 07:40:00.093174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa23fd0 is same with the state(6) to be set 00:25:42.003 [2024-11-20 07:40:00.093182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa23fd0 (9): Bad file descriptor 00:25:42.003 [2024-11-20 07:40:00.093189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.003 [2024-11-20 07:40:00.093194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.003 [2024-11-20 07:40:00.093200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.003 [2024-11-20 07:40:00.093204] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.003 [2024-11-20 07:40:00.093207] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.003 [2024-11-20 07:40:00.093211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:42.003 [2024-11-20 07:40:00.102829] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.003 [2024-11-20 07:40:00.102838] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.003 [2024-11-20 07:40:00.102842] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.003 [2024-11-20 07:40:00.102845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.003 [2024-11-20 07:40:00.102856] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:42.003 [2024-11-20 07:40:00.103005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-20 07:40:00.103014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa23fd0 with addr=10.0.0.2, port=4420 00:25:42.003 [2024-11-20 07:40:00.103019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa23fd0 is same with the state(6) to be set 00:25:42.003 [2024-11-20 07:40:00.103026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa23fd0 (9): Bad file descriptor 00:25:42.003 [2024-11-20 07:40:00.103034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.003 [2024-11-20 07:40:00.103039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.003 [2024-11-20 07:40:00.103044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.003 [2024-11-20 07:40:00.103049] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.003 [2024-11-20 07:40:00.103052] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.003 [2024-11-20 07:40:00.103055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.003 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.003 [2024-11-20 07:40:00.112885] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.003 [2024-11-20 07:40:00.112898] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.003 [2024-11-20 07:40:00.112901] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.003 [2024-11-20 07:40:00.112904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.003 [2024-11-20 07:40:00.112916] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:42.003 [2024-11-20 07:40:00.113260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-20 07:40:00.113270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa23fd0 with addr=10.0.0.2, port=4420 00:25:42.003 [2024-11-20 07:40:00.113275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa23fd0 is same with the state(6) to be set 00:25:42.003 [2024-11-20 07:40:00.113284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa23fd0 (9): Bad file descriptor 00:25:42.003 [2024-11-20 07:40:00.113298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.003 [2024-11-20 07:40:00.113303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.003 [2024-11-20 07:40:00.113308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.003 [2024-11-20 07:40:00.113313] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.003 [2024-11-20 07:40:00.113316] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.003 [2024-11-20 07:40:00.113319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.003 [2024-11-20 07:40:00.122945] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.003 [2024-11-20 07:40:00.122953] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.003 [2024-11-20 07:40:00.122957] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.003 [2024-11-20 07:40:00.122960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.003 [2024-11-20 07:40:00.122970] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.003 [2024-11-20 07:40:00.123085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-20 07:40:00.123092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa23fd0 with addr=10.0.0.2, port=4420 00:25:42.003 [2024-11-20 07:40:00.123097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa23fd0 is same with the state(6) to be set 00:25:42.003 [2024-11-20 07:40:00.123105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa23fd0 (9): Bad file descriptor 00:25:42.003 [2024-11-20 07:40:00.123113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.003 [2024-11-20 07:40:00.123117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.004 [2024-11-20 07:40:00.123122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.004 [2024-11-20 07:40:00.123126] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:42.004 [2024-11-20 07:40:00.123129] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.004 [2024-11-20 07:40:00.123133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.004 [2024-11-20 07:40:00.132998] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:42.004 [2024-11-20 07:40:00.133008] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:42.004 [2024-11-20 07:40:00.133011] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.004 [2024-11-20 07:40:00.133015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.004 [2024-11-20 07:40:00.133025] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:42.004 [2024-11-20 07:40:00.133337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-20 07:40:00.133346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa23fd0 with addr=10.0.0.2, port=4420 00:25:42.004 [2024-11-20 07:40:00.133352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa23fd0 is same with the state(6) to be set 00:25:42.004 [2024-11-20 07:40:00.133363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa23fd0 (9): Bad file descriptor 00:25:42.004 [2024-11-20 07:40:00.133371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.004 [2024-11-20 07:40:00.133376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.004 [2024-11-20 07:40:00.133381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.004 [2024-11-20 07:40:00.133385] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.004 [2024-11-20 07:40:00.133388] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.004 [2024-11-20 07:40:00.133391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:42.004 [2024-11-20 07:40:00.133762] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:42.004 [2024-11-20 07:40:00.133775] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:42.004 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.265 07:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.647 [2024-11-20 07:40:01.478911] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:43.647 [2024-11-20 07:40:01.478924] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:43.647 [2024-11-20 07:40:01.478933] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:43.647 [2024-11-20 07:40:01.567190] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:43.647 [2024-11-20 07:40:01.833335] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:43.647 [2024-11-20 07:40:01.833968] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xa3b1d0:1 started. 00:25:43.647 [2024-11-20 07:40:01.835344] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:43.647 [2024-11-20 07:40:01.835368] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:43.647 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.647 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:43.647 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:43.647 [2024-11-20 07:40:01.837562] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xa3b1d0 was disconnected and freed. delete nvme_qpair. 
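The NOT rpc_cmd invocation above re-issues bdev_nvme_start_discovery for a controller name (nvme) that is already attached, so the target is expected to reject it with JSON-RPC error -17 ("File exists"), and the harness inverts the exit status to count that rejection as a pass. A minimal sketch of that invert-and-assert pattern, assuming a target listening on /tmp/host.sock; the expect_rpc_failure helper below is hypothetical, standing in for the NOT wrapper seen in the trace:

    # Hypothetical stand-in for the NOT wrapper: succeed only when the command fails.
    expect_rpc_failure() {
        if "$@"; then
            echo "expected failure, but command succeeded: $*" >&2
            return 1
        fi
    }

    # Re-registering discovery under an existing name should yield -17 (File exists);
    # the wrapper turns that expected error into a passing check.
    expect_rpc_failure ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w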
00:25:43.647 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:43.647 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:43.647 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:43.647 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:43.647 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:43.647 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:43.647 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.647 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.647 request: 00:25:43.647 { 00:25:43.647 "name": "nvme", 00:25:43.907 "trtype": "tcp", 00:25:43.907 "traddr": "10.0.0.2", 00:25:43.907 "adrfam": "ipv4", 00:25:43.907 "trsvcid": "8009", 00:25:43.907 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:43.907 "wait_for_attach": true, 00:25:43.907 "method": "bdev_nvme_start_discovery", 00:25:43.907 "req_id": 1 00:25:43.907 } 00:25:43.907 Got JSON-RPC error response 00:25:43.907 response: 00:25:43.907 { 00:25:43.907 "code": -17, 00:25:43.907 "message": "File exists" 00:25:43.907 } 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.907 07:40:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:43.907 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.908 request: 00:25:43.908 { 00:25:43.908 "name": "nvme_second", 00:25:43.908 "trtype": "tcp", 00:25:43.908 "traddr": "10.0.0.2", 00:25:43.908 "adrfam": "ipv4", 00:25:43.908 "trsvcid": "8009", 00:25:43.908 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:43.908 "wait_for_attach": true, 00:25:43.908 "method": "bdev_nvme_start_discovery", 00:25:43.908 "req_id": 1 00:25:43.908 } 00:25:43.908 Got JSON-RPC error response 00:25:43.908 response: 00:25:43.908 { 00:25:43.908 "code": -17, 00:25:43.908 "message": "File exists" 00:25:43.908 } 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:43.908 07:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.908 07:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.911 [2024-11-20 07:40:03.094791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.911 [2024-11-20 07:40:03.094813] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3d900 with addr=10.0.0.2, port=8010 00:25:44.911 [2024-11-20 07:40:03.094823] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:44.911 [2024-11-20 07:40:03.094829] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:44.911 [2024-11-20 07:40:03.094834] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:46.299 [2024-11-20 07:40:04.097097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.299 [2024-11-20 07:40:04.097115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3d900 with addr=10.0.0.2, port=8010 00:25:46.299 [2024-11-20 07:40:04.097123] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:46.299 [2024-11-20 07:40:04.097128] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:46.300 [2024-11-20 07:40:04.097133] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:47.241 [2024-11-20 07:40:05.099125] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:47.241 request: 00:25:47.241 { 00:25:47.241 "name": "nvme_second", 00:25:47.241 "trtype": "tcp", 00:25:47.241 "traddr": "10.0.0.2", 00:25:47.241 "adrfam": "ipv4", 00:25:47.241 "trsvcid": "8010", 00:25:47.241 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:47.241 "wait_for_attach": false, 00:25:47.241 "attach_timeout_ms": 3000, 00:25:47.241 "method": "bdev_nvme_start_discovery", 00:25:47.241 "req_id": 1 00:25:47.241 } 00:25:47.241 Got JSON-RPC error response 00:25:47.241 response: 00:25:47.241 { 00:25:47.241 "code": -110, 00:25:47.241 "message": "Connection timed out" 00:25:47.241 } 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - 
SIGINT SIGTERM EXIT 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3522358 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:47.241 rmmod nvme_tcp 00:25:47.241 rmmod nvme_fabrics 00:25:47.241 rmmod nvme_keyring 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3522100 ']' 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3522100 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 3522100 ']' 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 3522100 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3522100 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3522100' 00:25:47.241 killing process with pid 3522100 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 3522100 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 3522100 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.241 07:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:49.787 00:25:49.787 real 0m20.575s 00:25:49.787 user 0m23.825s 00:25:49.787 sys 0m7.271s 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.787 ************************************ 00:25:49.787 END TEST nvmf_host_discovery 00:25:49.787 ************************************ 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.787 ************************************ 00:25:49.787 START TEST nvmf_host_multipath_status 00:25:49.787 ************************************ 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:49.787 * Looking for test storage... 
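The END/START banners above mark the hand-off from nvmf_host_discovery to nvmf_host_multipath_status: each suite runs under a run_test wrapper that prints the banner, times the script (the real/user/sys figures above), and propagates its exit status. A simplified sketch of that wrapper shape, under the assumption that banner printing and timing are its only responsibilities; the real autotest_common.sh version also validates arguments and manages xtrace, as the '[' 3 -le 1 ']' and xtrace_disable lines in the trace show:

    # Simplified banner-and-run wrapper; not the actual autotest_common.sh code.
    run_test_sketch() {
        local name=$1; shift
        printf '%s\nSTART TEST %s\n%s\n' '************' "$name" '************'
        time "$@"
        local rc=$?   # status of the timed test script
        printf '%s\nEND TEST %s\n%s\n' '************' "$name" '************'
        return $rc
    }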
00:25:49.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:49.787 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:49.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.788 --rc genhtml_branch_coverage=1 00:25:49.788 --rc genhtml_function_coverage=1 00:25:49.788 --rc genhtml_legend=1 00:25:49.788 --rc geninfo_all_blocks=1 00:25:49.788 --rc geninfo_unexecuted_blocks=1 00:25:49.788 00:25:49.788 ' 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:49.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.788 --rc genhtml_branch_coverage=1 00:25:49.788 --rc genhtml_function_coverage=1 00:25:49.788 --rc genhtml_legend=1 00:25:49.788 --rc geninfo_all_blocks=1 00:25:49.788 --rc geninfo_unexecuted_blocks=1 00:25:49.788 00:25:49.788 ' 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:49.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.788 --rc genhtml_branch_coverage=1 00:25:49.788 --rc genhtml_function_coverage=1 00:25:49.788 --rc genhtml_legend=1 00:25:49.788 --rc geninfo_all_blocks=1 00:25:49.788 --rc geninfo_unexecuted_blocks=1 00:25:49.788 00:25:49.788 ' 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:49.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.788 --rc genhtml_branch_coverage=1 00:25:49.788 --rc genhtml_function_coverage=1 00:25:49.788 --rc genhtml_legend=1 00:25:49.788 --rc geninfo_all_blocks=1 00:25:49.788 --rc geninfo_unexecuted_blocks=1 00:25:49.788 00:25:49.788 ' 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
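The xtrace above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2: cmp_versions splits each version string on '.', '-' and ':', then compares the resulting fields numerically, treating absent fields as 0. A self-contained sketch of that comparison, assuming purely numeric components:

    # Succeed (return 0) when dotted version $1 is strictly less than $2.
    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # absent fields compare as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo '1.15 < 2'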
00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.788 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:49.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:49.789 07:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:57.928 07:40:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.928 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:57.929 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
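The enumeration above classifies NICs by PCI vendor:device ID (0x8086:0x159b is an Intel E810-family part, collected into the e810 array) and then maps each matching PCI function to its kernel interface through sysfs, which is where the cvl_0_0/cvl_0_1 names reported below come from. A minimal sketch of that PCI-to-netdev resolution, assuming the standard sysfs layout; the address is just the one reported in this log:

    # List the kernel net devices backed by one PCI function.
    pci=0000:31:00.0                     # example address from the discovery above
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue     # no net/ children: not bound to a netdev driver
        echo "Found net devices under $pci: ${netdir##*/}"
    done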
00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:57.929 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:57.929 Found net devices under 0000:31:00.0: cvl_0_0 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:25:57.929 Found net devices under 0000:31:00.1: cvl_0_1 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.929 07:40:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:57.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:25:57.929 00:25:57.929 --- 10.0.0.2 ping statistics --- 00:25:57.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.929 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:57.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:25:57.929 00:25:57.929 --- 10.0.0.1 ping statistics --- 00:25:57.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.929 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3528591 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3528591 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:57.929 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3528591 ']' 00:25:57.930 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.930 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:57.930 07:40:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.930 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:57.930 07:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:57.930 [2024-11-20 07:40:15.476584] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:25:57.930 [2024-11-20 07:40:15.476653] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.930 [2024-11-20 07:40:15.580198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:57.930 [2024-11-20 07:40:15.631442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.930 [2024-11-20 07:40:15.631501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.930 [2024-11-20 07:40:15.631510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.930 [2024-11-20 07:40:15.631517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.930 [2024-11-20 07:40:15.631523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:57.930 [2024-11-20 07:40:15.633266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.930 [2024-11-20 07:40:15.633269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.190 07:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:58.190 07:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:25:58.190 07:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.190 07:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:58.190 07:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.190 07:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.190 07:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3528591 00:25:58.191 07:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:58.452 [2024-11-20 07:40:16.509923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.452 07:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:58.714 Malloc0 00:25:58.714 07:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:58.975 07:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:58.975 07:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.236 [2024-11-20 07:40:17.321904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.236 07:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:59.498 [2024-11-20 07:40:17.514333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:59.498 07:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3528955 00:25:59.498 07:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:59.498 07:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:59.498 07:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3528955 /var/tmp/bdevperf.sock 00:25:59.498 07:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3528955 ']' 00:25:59.498 07:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:59.498 07:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:59.498 07:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:59.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
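For the record, the target-side bring-up captured above boils down to a handful of rpc.py calls. The following is a condensed, hypothetical replay of exactly those calls (it assumes a running nvmf_tgt inside the cvl_0_0_ns_spdk namespace and the socket defaults used in this run; it is a sketch, not the actual test script):

    #!/usr/bin/env bash
    # Condensed replay of the target-side setup logged above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    "$RPC" nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the options recorded above
    "$RPC" bdev_malloc_create 64 512 -b Malloc0         # 64 MiB malloc bdev, 512 B blocks
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2   # -r enables ANA reporting
    "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # first path
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421   # second path

With both listeners up, bdevperf is started against /var/tmp/bdevperf.sock as logged below and attaches one controller per port in multipath mode.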
00:25:59.498 07:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:59.498 07:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:00.440 07:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:00.440 07:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:00.440 07:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:00.440 07:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:01.010 Nvme0n1 00:26:01.010 07:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:01.578 Nvme0n1 00:26:01.578 07:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:01.578 07:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:03.496 07:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:03.496 07:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:03.757 07:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:04.017 07:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:04.957 07:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:04.957 07:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:04.957 07:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.957 07:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.218 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.218 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:05.218 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.218 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.218 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.218 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.218 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.218 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.479 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.479 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.479 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.479 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.739 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.739 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.739 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.739 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.739 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.739 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:05.739 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.740 07:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:05.999 07:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.000 07:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:06.000 07:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
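Every current/connected/accessible probe in this stretch follows the same pattern: dump the I/O paths over bdevperf's RPC socket and pick out one field for one trsvcid with jq. A minimal sketch of that helper, with $RPC as in the previous sketch (the function name mirrors the multipath_status.sh helper seen in the trace; the exact body is inferred, not copied):

    port_status() {  # port_status <trsvcid> <field> <expected>
        local port=$1 field=$2 expected=$3
        local val
        val=$("$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
              | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$val" == "$expected" ]]
    }

    # After the 4420 listener goes non_optimized (above) and 4421 goes
    # optimized (next call below), the test expects the active path to move:
    #   port_status 4420 current false && port_status 4421 current true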
00:26:06.260 07:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:06.521 07:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:07.462 07:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:07.462 07:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:07.462 07:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.462 07:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.462 07:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.462 07:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:07.462 07:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.462 07:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.723 07:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.723 07:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.723 07:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.723 07:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.984 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.984 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:07.984 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.984 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.244 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.245 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.245 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:26:08.245 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.245 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.245 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:08.245 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.245 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.504 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.504 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:08.504 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:08.764 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:08.764 07:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:10.149 07:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:10.149 07:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.149 07:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.149 07:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.149 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.149 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:10.149 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.149 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.149 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.149 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.149 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.149 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.411 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.411 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:10.411 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.411 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.673 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.673 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:10.673 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.673 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:10.673 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.673 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:10.673 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.673 07:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:10.934 07:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.934 07:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:10.934 07:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:11.194 07:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:11.454 07:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:12.394 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:12.394 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:12.394 07:40:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.394 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.654 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.654 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:12.654 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.654 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:12.654 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.654 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:12.655 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.655 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:12.915 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.915 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:12.915 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.915 07:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.175 07:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.175 07:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.175 07:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.175 07:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.175 07:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.175 07:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:13.175 07:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.175 07:40:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:13.436 07:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.436 07:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:13.436 07:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:13.697 07:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:13.697 07:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:15.080 07:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:15.080 07:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:15.080 07:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.080 07:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.080 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.080 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:15.080 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.080 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.080 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.080 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.080 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.080 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.341 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.341 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.341 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.341 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.601 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.601 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:15.601 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.601 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.862 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.862 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:15.862 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.862 07:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.862 07:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.862 07:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:15.862 07:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:16.123 07:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:16.383 07:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:17.325 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:17.325 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:17.325 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.325 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.586 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.586 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:17.586 07:40:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.586 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.586 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.586 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.586 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.586 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:17.846 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.846 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:17.846 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.846 07:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.107 07:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.107 07:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:18.107 07:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.107 07:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.107 07:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.107 07:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:18.107 07:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.107 07:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:18.368 07:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.368 07:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:18.629 07:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:18.629 07:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:18.888 07:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:18.888 07:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:19.830 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:19.830 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:19.830 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.830 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.092 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.092 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:20.092 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.092 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.354 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.354 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.354 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.354 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.616 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.616 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.616 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.616 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.616 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.616 07:40:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:20.616 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.616 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.877 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.877 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.877 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.877 07:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.139 07:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.139 07:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:21.139 07:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:21.139 07:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:21.399 07:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:22.348 07:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:22.348 07:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:22.348 07:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.348 07:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.609 07:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.609 07:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:22.609 07:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.609 07:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.870 07:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.870 07:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.870 07:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.870 07:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.870 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.870 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.870 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.870 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.132 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.132 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.132 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.132 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.393 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.393 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:23.393 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.393 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.653 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.653 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:23.653 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:23.653 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:23.915 07:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
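Each round above flips the ANA state of both listeners, sleeps, and re-runs the six path checks. Since the bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call earlier in this run, both paths can report current=true at once, which is what the check_status true true ... that follows asserts. A sketch of the state-flipping helper matching the pair of rpc.py calls logged in every round (name mirrors multipath_status.sh; body inferred):

    set_ANA_state() {  # set_ANA_state <state_for_4420> <state_for_4421>
        "$RPC" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
               -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$RPC" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
               -t tcp -a 10.0.0.2 -s 4421 -n "$2"
        # ANA states exercised in this run: optimized, non_optimized, inaccessible
    }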
00:26:24.858 07:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:24.858 07:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:24.858 07:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.858 07:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.119 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.119 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:25.119 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.119 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.381 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.382 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.382 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.382 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.382 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.382 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.382 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.382 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.643 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.643 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.643 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.643 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.905 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.905 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.905 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.905 07:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.905 07:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.905 07:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:25.905 07:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:26.165 07:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:26.426 07:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:27.370 07:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:27.370 07:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.370 07:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.370 07:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.633 07:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.633 07:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:27.633 07:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.633 07:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.633 07:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.633 07:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.633 07:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.633 07:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.894 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:27.894 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.894 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.894 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:28.154 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.154 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:28.154 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.154 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.415 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.415 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:28.415 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.415 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.415 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.415 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3528955 00:26:28.415 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3528955 ']' 00:26:28.415 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3528955 00:26:28.415 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:26:28.415 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:28.415 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3528955 00:26:28.700 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:26:28.700 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:26:28.700 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3528955' 00:26:28.700 killing process with pid 3528955 00:26:28.700 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3528955 00:26:28.700 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3528955 00:26:28.700 { 00:26:28.700 "results": [ 00:26:28.700 { 00:26:28.700 "job": "Nvme0n1", 
00:26:28.700 "core_mask": "0x4", 00:26:28.700 "workload": "verify", 00:26:28.700 "status": "terminated", 00:26:28.700 "verify_range": { 00:26:28.700 "start": 0, 00:26:28.700 "length": 16384 00:26:28.700 }, 00:26:28.700 "queue_depth": 128, 00:26:28.700 "io_size": 4096, 00:26:28.700 "runtime": 26.902961, 00:26:28.700 "iops": 11863.117966829004, 00:26:28.700 "mibps": 46.3403045579258, 00:26:28.700 "io_failed": 0, 00:26:28.700 "io_timeout": 0, 00:26:28.700 "avg_latency_us": 10769.443283524412, 00:26:28.700 "min_latency_us": 206.50666666666666, 00:26:28.700 "max_latency_us": 3019898.88 00:26:28.700 } 00:26:28.700 ], 00:26:28.700 "core_count": 1 00:26:28.700 } 00:26:28.700 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3528955 00:26:28.700 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:28.700 [2024-11-20 07:40:17.595152] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:26:28.700 [2024-11-20 07:40:17.595234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3528955 ] 00:26:28.700 [2024-11-20 07:40:17.694208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.700 [2024-11-20 07:40:17.744755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.700 Running I/O for 90 seconds... 00:26:28.700 11131.00 IOPS, 43.48 MiB/s [2024-11-20T06:40:46.910Z] 11182.50 IOPS, 43.68 MiB/s [2024-11-20T06:40:46.910Z] 11189.00 IOPS, 43.71 MiB/s [2024-11-20T06:40:46.910Z] 11601.75 IOPS, 45.32 MiB/s [2024-11-20T06:40:46.910Z] 11894.40 IOPS, 46.46 MiB/s [2024-11-20T06:40:46.910Z] 12074.17 IOPS, 47.16 MiB/s [2024-11-20T06:40:46.910Z] 12199.29 IOPS, 47.65 MiB/s [2024-11-20T06:40:46.910Z] 12288.38 IOPS, 48.00 MiB/s [2024-11-20T06:40:46.910Z] 12353.78 IOPS, 48.26 MiB/s [2024-11-20T06:40:46.910Z] 12422.90 IOPS, 48.53 MiB/s [2024-11-20T06:40:46.910Z] 12461.55 IOPS, 48.68 MiB/s [2024-11-20T06:40:46.910Z] [2024-11-20 07:40:31.680005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:28.700 [2024-11-20 07:40:31.680274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 
nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.700 [2024-11-20 07:40:31.680522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:28.700 [2024-11-20 07:40:31.680533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:26:28.701 [2024-11-20 07:40:31.680822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.680988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.680993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.681004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.681010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.681021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.681026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.681037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.681042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.681054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.681059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.681070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.681076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.681087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.681092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.681103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.681108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.681120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.681125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.681136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.681141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.681152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.681158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:28.701 [2024-11-20 07:40:31.681169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.701 [2024-11-20 07:40:31.681174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.681186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.681192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.681203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.681208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.681220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.681225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.681237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.681242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.681253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.681258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.682117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.682140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.682161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.682182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.682202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.682223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.682244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.682265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.702 [2024-11-20 07:40:31.682286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:30 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:31.682593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:31.682599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:28.702 12435.67 IOPS, 48.58 MiB/s [2024-11-20T06:40:46.912Z] 11479.08 IOPS, 44.84 MiB/s [2024-11-20T06:40:46.912Z] 10659.14 IOPS, 41.64 MiB/s [2024-11-20T06:40:46.912Z] 9984.73 IOPS, 39.00 MiB/s [2024-11-20T06:40:46.912Z] 10171.06 IOPS, 39.73 MiB/s [2024-11-20T06:40:46.912Z] 10329.82 IOPS, 40.35 MiB/s [2024-11-20T06:40:46.912Z] 10632.17 IOPS, 41.53 MiB/s [2024-11-20T06:40:46.912Z] 10935.84 IOPS, 42.72 MiB/s [2024-11-20T06:40:46.912Z] 11133.10 IOPS, 43.49 MiB/s [2024-11-20T06:40:46.912Z] 11215.62 IOPS, 43.81 MiB/s [2024-11-20T06:40:46.912Z] 11288.05 IOPS, 44.09 MiB/s [2024-11-20T06:40:46.912Z] 11465.96 IOPS, 44.79 MiB/s [2024-11-20T06:40:46.912Z] 11662.79 IOPS, 45.56 MiB/s [2024-11-20T06:40:46.912Z] [2024-11-20 07:40:44.411832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:44.411868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:44.411886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:44.411892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:44.411903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:44.411908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:44.411919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:44.411924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:44.411935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:44.411940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:28.702 [2024-11-20 07:40:44.411950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.702 [2024-11-20 07:40:44.411955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.703 
[2024-11-20 07:40:44.411966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.411971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.411981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.411987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.411997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412275] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.703 [2024-11-20 07:40:44.412322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.703 [2024-11-20 07:40:44.412338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.412397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.412403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.413245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.413256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.413267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 
07:40:44.413273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.413283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.413291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.413302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.413307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.413317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.413323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.413333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.413338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.413349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.413354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.413365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.413371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.413381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.413387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.413397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.703 [2024-11-20 07:40:44.413402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:28.703 [2024-11-20 07:40:44.413413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.704 [2024-11-20 07:40:44.413418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:28.704 [2024-11-20 07:40:44.413428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97040 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:28.704 [2024-11-20 07:40:44.413434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:28.704 [2024-11-20 07:40:44.413444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.704 [2024-11-20 07:40:44.413449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:28.704 [2024-11-20 07:40:44.413459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.704 [2024-11-20 07:40:44.413465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:28.704 [2024-11-20 07:40:44.413475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.704 [2024-11-20 07:40:44.413480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:28.704 [2024-11-20 07:40:44.413492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.704 [2024-11-20 07:40:44.413497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:28.704 [2024-11-20 07:40:44.413508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.704 [2024-11-20 07:40:44.413513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:28.704 [2024-11-20 07:40:44.413523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.704 [2024-11-20 07:40:44.413529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:28.704 [2024-11-20 07:40:44.413539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.704 [2024-11-20 07:40:44.413544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:28.704 [2024-11-20 07:40:44.413554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.704 [2024-11-20 07:40:44.413560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:28.704 [2024-11-20 07:40:44.413570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.704 [2024-11-20 07:40:44.413575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:28.704 [2024-11-20 07:40:44.413585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:112 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.704 [2024-11-20 07:40:44.413591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:28.704 [2024-11-20 07:40:44.413601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.704 [2024-11-20 07:40:44.413606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:28.704 [2024-11-20 07:40:44.413617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.704 [2024-11-20 07:40:44.413622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:28.704 [2024-11-20 07:40:44.413632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.704 [2024-11-20 07:40:44.413637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0
[... several hundred further *NOTICE* command/completion pairs in the same format, 07:40:44.413648 through 07:40:44.432915: READ/WRITE sqid:1 nsid:1 (lba 96328-97776, len:8), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:28.709 [2024-11-20 07:40:44.432928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.709 [2024-11-20 07:40:44.432934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:28.709 [2024-11-20 07:40:44.432946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.709 [2024-11-20 07:40:44.432953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:28.709 [2024-11-20 07:40:44.432965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.709 [2024-11-20 07:40:44.432971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:28.709 [2024-11-20 07:40:44.432983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.709 [2024-11-20 07:40:44.432989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.709 [2024-11-20 07:40:44.433001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.709 [2024-11-20 07:40:44.433007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.709 [2024-11-20 07:40:44.433019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.709 [2024-11-20 07:40:44.433025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:28.709 [2024-11-20 07:40:44.433037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.709 [2024-11-20 07:40:44.433043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:28.709 [2024-11-20 07:40:44.433057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.709 [2024-11-20 07:40:44.433063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:28.709 [2024-11-20 07:40:44.433075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.709 [2024-11-20 07:40:44.433081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:28.709 [2024-11-20 07:40:44.433093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.709 [2024-11-20 07:40:44.433099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:28.709 [2024-11-20 07:40:44.433111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.709 [2024-11-20 07:40:44.433118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:28.709 [2024-11-20 07:40:44.433130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.709 [2024-11-20 07:40:44.433136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.709 [2024-11-20 07:40:44.433759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.709 [2024-11-20 07:40:44.433771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:28.709 [2024-11-20 07:40:44.433785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.709 [2024-11-20 07:40:44.433792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:28.709 [2024-11-20 07:40:44.433804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.709 [2024-11-20 07:40:44.433810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:28.709 [2024-11-20 07:40:44.433822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.433829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.433841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.433848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.433860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.433866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.433878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.433884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.433898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.433905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.433916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.433923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.433935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.433941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.433953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.433959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.433971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.433977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.433990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.433996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-20 07:40:44.434014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-20 07:40:44.434034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-20 07:40:44.434052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-20 07:40:44.434070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-20 07:40:44.434088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-20 07:40:44.434107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:28.710 [2024-11-20 07:40:44.434404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.434424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.434442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.434460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.434479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.434496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.434515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-20 07:40:44.434533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.434546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-20 07:40:44.434552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.435716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.435736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-20 07:40:44.435760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-20 07:40:44.435783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.435800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-20 07:40:44.435819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-20 07:40:44.435837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.435856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.435874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.435892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.435911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.435929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.710 [2024-11-20 07:40:44.435947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.710 [2024-11-20 07:40:44.435965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:28.710 [2024-11-20 07:40:44.435978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.435983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.435995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.436001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.436022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.436041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.436059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.436078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.436096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 
dnr:0 00:26:28.711 [2024-11-20 07:40:44.436108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.436114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.436133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.436151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.436169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.436187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.436206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.436224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.436250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.436268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.436286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.436304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.436323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.436342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.436354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.436360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.437458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.437472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.437486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.437493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.437505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.437511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.437523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.437529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.437542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.437548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.437563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.437569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.437581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.437588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.437600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.437606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.437617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.437623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.437636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.437642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.437654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.437660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.438624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.438636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.438649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.438656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.438668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.438674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.438687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.711 [2024-11-20 07:40:44.438694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.438706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.711 [2024-11-20 07:40:44.438712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.438724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.438731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:28.711 [2024-11-20 07:40:44.438743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.711 [2024-11-20 07:40:44.438756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.438768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.438775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.438787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.438793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.438806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.438812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.438824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.438830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.438842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.438849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.438860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.438866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.438878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.438885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.438897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.438903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.438915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.438923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.438937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.438944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.438957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.438962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.438975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.438985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.438997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.439021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.439040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.439094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.439112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.439130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.439149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.439167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:26:28.712 [2024-11-20 07:40:44.439272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.712 [2024-11-20 07:40:44.439389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.439407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.439425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:28.712 [2024-11-20 07:40:44.439440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.712 [2024-11-20 07:40:44.439446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:26:28.712 - 00:26:28.718 [2024-11-20 07:40:44.440832 - 07:40:44.452015] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated qid:1 READ/WRITE commands (nsid:1, lba:96776-99120, len:8, SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 
00:26:28.718 [2024-11-20 07:40:44.452025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-20 07:40:44.452030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.452047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.452063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.452078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-20 07:40:44.452094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.452109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.452125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.452142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-20 07:40:44.452158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.452174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.452190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.452206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.452222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.718 [2024-11-20 07:40:44.452240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.452256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.452272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.452283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.452289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.453024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.718 [2024-11-20 07:40:44.453036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:28.718 [2024-11-20 07:40:44.453048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:28.719 [2024-11-20 07:40:44.453080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-20 07:40:44.453148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-20 07:40:44.453167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-20 07:40:44.453448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-20 07:40:44.453464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.453569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.453574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.454400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-20 07:40:44.454416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-20 07:40:44.454432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.454447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.454462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.454477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.454492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.454508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-20 07:40:44.454523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-20 07:40:44.454538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.454554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.719 [2024-11-20 07:40:44.454569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.454587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.454603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-20 07:40:44.454619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.454635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-20 07:40:44.454650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.719 [2024-11-20 07:40:44.454668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:28.719 [2024-11-20 07:40:44.454679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.719 [2024-11-20 07:40:44.454684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.454700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.454716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.454731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.454751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.454766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.454784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.454800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.454815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.454831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.454847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.454862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.454878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.454893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.454909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.454924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.454940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.454950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.454957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.455654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.455666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.455678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.455684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.455695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.455700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.455711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.455716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.455726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.455732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:26:28.720 [2024-11-20 07:40:44.455743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.455753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.455763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.455769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.456903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.456915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.456927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.456932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.456942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.456948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.456958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.456963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.456973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.456978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.456988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.456993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.457005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.457011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.457021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.457026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.457036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.457041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.457051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.457056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.457066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.457071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.457082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.457087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.457097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.720 [2024-11-20 07:40:44.457102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.457112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.720 [2024-11-20 07:40:44.457118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:28.720 [2024-11-20 07:40:44.457128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.457133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.457149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.457164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.457180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.457198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.457214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.457229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.457245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.457261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.457276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.457292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.457308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.457323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.721 [2024-11-20 07:40:44.457339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.457355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.457899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.457918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.457934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.457950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.457966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.457981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.457992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.457997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.458007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.458013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.458023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 
lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.458028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.458039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.458044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.458054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.458059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.458070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.458075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.458085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.458091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.458101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.458108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.458118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.458123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.458134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.458139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.458149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.721 [2024-11-20 07:40:44.458155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.458165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.721 [2024-11-20 07:40:44.458170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:28.721 [2024-11-20 07:40:44.458181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.721 [2024-11-20 07:40:44.458186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:28.721 [2024-11-20 07:40:44.458197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.721 [2024-11-20 07:40:44.458202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:26:28.721 [2024-11-20 07:40:44.458464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.721 [2024-11-20 07:40:44.458472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
[... ~200 further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs (07:40:44.458483 - 07:40:44.468013) omitted: READ and WRITE commands on sqid:1, nsid:1 (lba 98264-100480, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd incrementing from 0x0031 and wrapping past 0x007f to 0x007a ...]
00:26:28.727 [2024-11-20 07:40:44.468023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-20 07:40:44.468028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:28.727 [2024-11-20 07:40:44.468039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.727 [2024-11-20
07:40:44.468044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-20 07:40:44.468060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.468075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.468091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.468106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.468122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.468138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.468155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-20 07:40:44.468172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.468189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100048 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-20 07:40:44.468204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-20 07:40:44.468220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.468235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.468251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.468262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.468267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.469932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.469946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.469957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.469963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.469973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-20 07:40:44.469979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.469989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-20 07:40:44.469994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.470004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-20 07:40:44.470009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.470019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.727 [2024-11-20 07:40:44.470024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.470034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.470039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:28.727 [2024-11-20 07:40:44.470052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.727 [2024-11-20 07:40:44.470058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.470073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.470089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.470105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.470120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.470135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.470150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 
m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.470323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.470372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.470427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.470446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.470465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.470484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.470495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 
07:40:44.470500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.471071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.471083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.471095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.471100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.471111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.471116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.471128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.471134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.471144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.471149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.471160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.471166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.471176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.471181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.471192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.471198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.471209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.728 [2024-11-20 07:40:44.471217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.471230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100488 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.471236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:28.728 [2024-11-20 07:40:44.471561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.728 [2024-11-20 07:40:44.471572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.471585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.471591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.471603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.471612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.471624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.471631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.471644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.471652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.471663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.471669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.471679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.471686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.471698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.471704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.471714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.471719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.471730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.471736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.472651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.472668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.472684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.472700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.472717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.472733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.472754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.472769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.472786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:26:28.729 [2024-11-20 07:40:44.472797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.472802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.472817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.472833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.472850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.472868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.472884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.472899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.472916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.472932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.472947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.472963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.472978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.472989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.472994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.473005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.473011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.473022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.473028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.473038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.473044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.473055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.729 [2024-11-20 07:40:44.473061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.473071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.473076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.473087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.729 [2024-11-20 07:40:44.473092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:28.729 [2024-11-20 07:40:44.473102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-20 07:40:44.473108] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-20 07:40:44.473123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-20 07:40:44.473139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.473154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-20 07:40:44.473170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-20 07:40:44.473186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.473201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-20 07:40:44.473217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-20 07:40:44.473233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-20 07:40:44.473848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-20 07:40:44.473865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-20 07:40:44.473881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.473896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.473912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.473928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.473938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.473943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-20 07:40:44.474145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.730 [2024-11-20 07:40:44.474167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.474186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.474205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474216] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.474223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.474241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.474258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.474274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.474290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.474586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.474602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.474619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.474636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 07:40:44.474646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.730 [2024-11-20 07:40:44.474652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:28.730 [2024-11-20 
07:40:44.474662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.730 [2024-11-20 07:40:44.474667 - 07:40:44.482914] nvme_qpair.c: repeated *NOTICE* records: the queued qid:1 READ/WRITE commands (cid and lba varying over roughly lba 99616-101760, len:8) are reprinted by 243:nvme_io_qpair_print_command and each completes via 474:spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 while the path is held inaccessible
00:26:28.735 11796.56 IOPS, 46.08 MiB/s
00:26:28.735 [2024-11-20T06:40:46.945Z] 11831.27 IOPS, 46.22 MiB/s
00:26:28.735 [2024-11-20T06:40:46.945Z] Received shutdown signal, test time was about 26.903572 seconds
00:26:28.735
00:26:28.735 Latency(us)
00:26:28.735 [2024-11-20T06:40:46.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:28.735 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:28.735 Verification LBA range: start 0x0 length 0x4000
00:26:28.735 Nvme0n1 : 26.90 11863.12 46.34 0.00 0.00 10769.44 206.51 3019898.88
00:26:28.735 [2024-11-20T06:40:46.945Z] ===================================================================================================================
00:26:28.735 [2024-11-20T06:40:46.945Z] Total : 11863.12 46.34 0.00 0.00 10769.44 206.51 3019898.88
00:26:28.735 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:28.996 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:28.996 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:28.996 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:28.996 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:28.996 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:26:28.996 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:28.996 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:26:28.996 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:28.996 07:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:28.996 rmmod nvme_tcp
00:26:28.996 rmmod nvme_fabrics
00:26:28.996 rmmod nvme_keyring
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
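For reference, the teardown the trace above walks through condenses to a short shell sketch. This is a reconstruction from the traced commands only, not the actual nvmftestfini/nvmfcleanup implementation in nvmf/common.sh, and the retry pacing is an assumption (the trace does not show the loop body's timing):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Delete the test subsystem first so no initiator still holds the transport open.
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
set +e                                   # module removal may fail while references drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break     # rmmod output (nvme_tcp/_fabrics/_keyring) shows up as above
    sleep 1                              # assumption: retry pacing not visible in the trace
done
set -e
modprobe -v -r nvme-fabrics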
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3528591 ']'
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3528591
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3528591 ']'
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3528591
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3528591
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3528591'
00:26:28.996 killing process with pid 3528591
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3528591
00:26:28.996 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3528591
00:26:29.259 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:29.259 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:29.259 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:29.259 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:26:29.259 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:29.259 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:26:29.259 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:26:29.259 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:29.259 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:29.259 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:29.259 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:29.259 07:40:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:31.239 07:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:31.239
00:26:31.239 real 0m41.714s
00:26:31.239 user 1m47.924s
00:26:31.239 sys 0m11.590s
00:26:31.239 07:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:31.239 07:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:31.239 ************************************
00:26:31.239 END TEST nvmf_host_multipath_status
00:26:31.239 ************************************
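The killprocess helper traced above refuses to kill a sudo wrapper and reaps the target PID so the next test starts from a clean slate. A minimal sketch matching the traced checks (autotest_common.sh is the authoritative version; its error handling is trimmed here, and the line-number comments refer to the @NNN markers in the trace):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                            # @952: no PID supplied
    kill -0 "$pid" 2> /dev/null || return 0              # @956: already gone
    if [ "$(uname)" = Linux ]; then                      # @957
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # @958
        [ "$process_name" = sudo ] && return 1           # @962: never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"                 # @970
    kill "$pid"                                          # @971
    wait "$pid"                                          # @976: reap it (the target app is a child job)
}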
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:31.239 07:40:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:31.239 07:40:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:31.239 07:40:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.239 ************************************ 00:26:31.239 START TEST nvmf_discovery_remove_ifc 00:26:31.239 ************************************ 00:26:31.239 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:31.501 * Looking for test storage... 00:26:31.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.501 --rc genhtml_branch_coverage=1 00:26:31.501 --rc genhtml_function_coverage=1 00:26:31.501 --rc genhtml_legend=1 00:26:31.501 --rc geninfo_all_blocks=1 00:26:31.501 --rc geninfo_unexecuted_blocks=1 00:26:31.501 00:26:31.501 ' 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.501 --rc genhtml_branch_coverage=1 00:26:31.501 --rc genhtml_function_coverage=1 00:26:31.501 --rc genhtml_legend=1 00:26:31.501 --rc geninfo_all_blocks=1 00:26:31.501 --rc geninfo_unexecuted_blocks=1 00:26:31.501 00:26:31.501 ' 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.501 --rc genhtml_branch_coverage=1 00:26:31.501 --rc genhtml_function_coverage=1 00:26:31.501 --rc genhtml_legend=1 00:26:31.501 --rc geninfo_all_blocks=1 00:26:31.501 --rc geninfo_unexecuted_blocks=1 00:26:31.501 00:26:31.501 ' 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:31.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.501 --rc genhtml_branch_coverage=1 00:26:31.501 --rc genhtml_function_coverage=1 00:26:31.501 --rc genhtml_legend=1 00:26:31.501 --rc geninfo_all_blocks=1 00:26:31.501 --rc geninfo_unexecuted_blocks=1 00:26:31.501 00:26:31.501 ' 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.501 
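The lcov check traced above walks through SPDK's shell version comparison: lt 1.15 2 calls cmp_versions 1.15 '<' 2 in scripts/common.sh, which splits both version strings on ".", "-" and ":" and compares them field by field until one side wins. A stand-alone sketch of that field-wise compare, with an illustrative function name and the simplifying assumption that missing fields count as zero (the real helper also validates each field with a decimal check, as the trace shows):

    # Illustrative re-creation of the field-wise compare traced above; the
    # real logic lives in lt()/cmp_versions() in scripts/common.sh.
    version_lt() {
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # first lower field decides
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # same verdict as the lt 1.15 2 call above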
07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:31.501 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:31.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:31.502 07:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:39.646 07:40:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:39.646 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.646 07:40:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:39.646 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:39.646 Found net devices under 0000:31:00.0: cvl_0_0 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:39.646 Found net devices under 0000:31:00.1: cvl_0_1 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:39.646 07:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:39.646 
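The nvmf_tcp_init sequence above builds the test topology: the two ports of the E810 pair (cvl_0_0, cvl_0_1) are split so the target side lives in a private network namespace, and an iptables rule opens the NVMe/TCP listener port before the ping checks that follow. Condensed into plain commands (interface names and addresses mirror the trace; a sketch, not the full nvmf/common.sh):

    # The target port moves into its own netns so initiator (10.0.0.1, root
    # namespace) and target (10.0.0.2, cvl_0_0_ns_spdk) talk over a real link
    # on a single host.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port; the comment tags the rule so cleanup can find it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF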
07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:39.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:26:39.646 00:26:39.646 --- 10.0.0.2 ping statistics --- 00:26:39.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.646 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:39.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:26:39.646 00:26:39.646 --- 10.0.0.1 ping statistics --- 00:26:39.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.646 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3538900 00:26:39.646 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3538900 00:26:39.647 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:39.647 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3538900 ']' 00:26:39.647 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.647 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:39.647 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:39.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.647 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:39.647 07:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.647 [2024-11-20 07:40:57.230894] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:26:39.647 [2024-11-20 07:40:57.230970] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.647 [2024-11-20 07:40:57.332071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.647 [2024-11-20 07:40:57.382560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.647 [2024-11-20 07:40:57.382611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.647 [2024-11-20 07:40:57.382620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.647 [2024-11-20 07:40:57.382628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.647 [2024-11-20 07:40:57.382634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.647 [2024-11-20 07:40:57.383485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.907 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:39.907 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:39.907 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:39.907 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:39.907 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.907 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.907 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:39.907 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.907 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.907 [2024-11-20 07:40:58.108511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.169 [2024-11-20 07:40:58.116841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:40.169 null0 00:26:40.169 [2024-11-20 07:40:58.148725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.169 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.169 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3539237 00:26:40.169 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:26:40.169 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3539237 /tmp/host.sock 00:26:40.169 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3539237 ']' 00:26:40.169 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:40.169 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:40.169 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:40.169 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:40.169 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:40.169 07:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.169 [2024-11-20 07:40:58.227676] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:26:40.169 [2024-11-20 07:40:58.227737] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539237 ] 00:26:40.169 [2024-11-20 07:40:58.320753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.430 [2024-11-20 07:40:58.374836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.004 07:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.394 [2024-11-20 07:41:00.217968] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:42.394 [2024-11-20 07:41:00.217994] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:42.394 [2024-11-20 07:41:00.218008] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:42.394 [2024-11-20 07:41:00.344395] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:42.394 [2024-11-20 07:41:00.530590] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:42.394 [2024-11-20 07:41:00.531499] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x13f4550:1 started. 00:26:42.394 [2024-11-20 07:41:00.533075] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:42.394 [2024-11-20 07:41:00.533113] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:42.394 [2024-11-20 07:41:00.533133] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:42.394 [2024-11-20 07:41:00.533147] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:42.394 [2024-11-20 07:41:00.533169] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:42.394 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.394 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:42.394 [2024-11-20 07:41:00.536696] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x13f4550 was disconnected and freed. delete nvme_qpair. 
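What the host-side log above records, reduced to its RPC sequence: the host app was launched with --wait-for-rpc so bdev_nvme options can be set before the framework initializes, and bdev_nvme_start_discovery then blocks until the discovered subsystem's namespace has attached as a bdev (nvme0n1 here). A condensed sketch; the flags are taken verbatim from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # options must land before framework init, hence the --wait-for-rpc launch
    $rpc -s /tmp/host.sock bdev_nvme_set_options -e 1
    $rpc -s /tmp/host.sock framework_start_init
    # attach via the discovery service on port 8009; --wait-for-attach blocks
    # until the NVM subsystem's namespaces are exposed as bdevs
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach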
00:26:42.394 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.394 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.394 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.394 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.394 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.394 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.394 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.394 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.395 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:42.395 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:42.395 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:42.655 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:42.655 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.655 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.655 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.655 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.655 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.655 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.655 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.655 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.655 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:42.655 07:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.601 07:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.601 07:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.601 07:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.601 07:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.601 07:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.601 07:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.601 07:41:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.601 07:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.861 07:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:43.861 07:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:44.802 07:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.802 07:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.802 07:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.802 07:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.802 07:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.802 07:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.802 07:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.802 07:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.802 07:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:44.802 07:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:45.738 07:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.738 07:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.738 07:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.738 07:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.738 07:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.738 07:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.738 07:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.738 07:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.738 07:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:45.738 07:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:47.122 07:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:47.122 07:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.122 07:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:47.122 07:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.122 07:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:47.122 07:41:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.122 07:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:47.122 07:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.122 07:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:47.122 07:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:48.064 [2024-11-20 07:41:05.973774] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:48.064 [2024-11-20 07:41:05.973806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.064 [2024-11-20 07:41:05.973815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.064 [2024-11-20 07:41:05.973822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.064 [2024-11-20 07:41:05.973828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.064 [2024-11-20 07:41:05.973834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.064 [2024-11-20 07:41:05.973839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.064 [2024-11-20 07:41:05.973845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.064 [2024-11-20 07:41:05.973850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.064 [2024-11-20 07:41:05.973856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.064 [2024-11-20 07:41:05.973861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.064 [2024-11-20 07:41:05.973866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0ec0 is same with the state(6) to be set 00:26:48.064 [2024-11-20 07:41:05.983796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0ec0 (9): Bad file descriptor 00:26:48.064 07:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:48.064 07:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.064 07:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:48.064 07:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.064 07:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:48.064 07:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:26:48.064 07:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:48.064 [2024-11-20 07:41:05.993832] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:48.064 [2024-11-20 07:41:05.993845] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:48.064 [2024-11-20 07:41:05.993849] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:48.064 [2024-11-20 07:41:05.993853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:48.064 [2024-11-20 07:41:05.993868] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:49.005 [2024-11-20 07:41:07.037831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:49.005 [2024-11-20 07:41:07.037929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d0ec0 with addr=10.0.0.2, port=4420 00:26:49.005 [2024-11-20 07:41:07.037962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0ec0 is same with the state(6) to be set 00:26:49.006 [2024-11-20 07:41:07.038024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d0ec0 (9): Bad file descriptor 00:26:49.006 [2024-11-20 07:41:07.039155] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:26:49.006 [2024-11-20 07:41:07.039227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.006 [2024-11-20 07:41:07.039250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.006 [2024-11-20 07:41:07.039273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.006 [2024-11-20 07:41:07.039295] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:49.006 [2024-11-20 07:41:07.039311] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.006 [2024-11-20 07:41:07.039325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:49.006 [2024-11-20 07:41:07.039347] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.006 [2024-11-20 07:41:07.039362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.006 07:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.006 07:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:49.006 07:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:49.948 [2024-11-20 07:41:08.041784] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
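The identical get_bdev_list blocks repeating once per second above are the test polling for the bdev list to reach an expected state while the downed interface keeps the reconnect attempts failing. A plausible reconstruction of that polling pair as traced (the real helpers live in host/discovery_remove_ifc.sh; rpc_cmd is the suite's rpc.py wrapper):

    get_bdev_list() {
        # bdev names as a single sorted, space-separated string
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # poll until the list matches exactly; '' means "no bdevs left"
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }
    wait_for_bdev ''   # after the link drop, nvme0n1 must eventually disappear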
00:26:49.948 [2024-11-20 07:41:08.041800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.948 [2024-11-20 07:41:08.041809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.948 [2024-11-20 07:41:08.041814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.948 [2024-11-20 07:41:08.041820] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:49.948 [2024-11-20 07:41:08.041825] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:49.948 [2024-11-20 07:41:08.041829] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.948 [2024-11-20 07:41:08.041832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:49.948 [2024-11-20 07:41:08.041851] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:49.948 [2024-11-20 07:41:08.041869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.948 [2024-11-20 07:41:08.041880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.948 [2024-11-20 07:41:08.041888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.948 [2024-11-20 07:41:08.041894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.948 [2024-11-20 07:41:08.041899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.948 [2024-11-20 07:41:08.041905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.948 [2024-11-20 07:41:08.041910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.948 [2024-11-20 07:41:08.041915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.948 [2024-11-20 07:41:08.041921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.948 [2024-11-20 07:41:08.041926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.948 [2024-11-20 07:41:08.041932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:26:49.948 [2024-11-20 07:41:08.042302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c0600 (9): Bad file descriptor 00:26:49.948 [2024-11-20 07:41:08.043313] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:49.948 [2024-11-20 07:41:08.043322] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:49.948 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.948 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.948 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.948 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.948 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.948 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.948 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.948 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.948 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:49.948 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.948 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.210 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:50.210 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:50.210 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.210 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:50.210 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.210 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:50.210 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.210 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:50.210 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.210 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:50.210 07:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:51.150 07:41:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:51.150 07:41:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.150 07:41:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:51.150 07:41:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.150 07:41:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:51.150 07:41:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.150 07:41:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:51.150 07:41:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.150 07:41:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:51.150 07:41:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:52.091 [2024-11-20 07:41:10.102658] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:52.091 [2024-11-20 07:41:10.102675] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:52.091 [2024-11-20 07:41:10.102686] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:52.091 [2024-11-20 07:41:10.233062] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:52.091 [2024-11-20 07:41:10.290831] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:52.091 [2024-11-20 07:41:10.291559] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x13db510:1 started. 00:26:52.091 [2024-11-20 07:41:10.292454] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:52.091 [2024-11-20 07:41:10.292482] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:52.091 [2024-11-20 07:41:10.292497] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:52.091 [2024-11-20 07:41:10.292508] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:52.091 [2024-11-20 07:41:10.292514] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:52.352 [2024-11-20 07:41:10.300813] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x13db510 was disconnected and freed. delete nvme_qpair. 
00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3539237 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3539237 ']' 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3539237 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3539237 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3539237' 00:26:52.352 killing process with pid 3539237 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3539237 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3539237 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:52.352 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:52.352 rmmod nvme_tcp 00:26:52.612 rmmod nvme_fabrics 00:26:52.612 rmmod nvme_keyring 00:26:52.612 07:41:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:52.612 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:52.612 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:52.612 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3538900 ']' 00:26:52.612 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3538900 00:26:52.612 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3538900 ']' 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3538900 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3538900 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3538900' 00:26:52.613 killing process with pid 3538900 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3538900 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3538900 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.613 07:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.159 07:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:55.159 00:26:55.159 real 0m23.498s 00:26:55.159 user 0m27.453s 00:26:55.159 sys 0m7.206s 00:26:55.159 07:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:26:55.159 07:41:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.159 ************************************ 00:26:55.159 END TEST nvmf_discovery_remove_ifc 00:26:55.159 ************************************ 00:26:55.159 07:41:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:55.159 07:41:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:55.159 07:41:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:55.159 07:41:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.159 ************************************ 00:26:55.159 START TEST nvmf_identify_kernel_target 00:26:55.159 ************************************ 00:26:55.159 07:41:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:55.159 * Looking for test storage... 00:26:55.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:55.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.159 --rc genhtml_branch_coverage=1 00:26:55.159 --rc genhtml_function_coverage=1 00:26:55.159 --rc genhtml_legend=1 00:26:55.159 --rc geninfo_all_blocks=1 00:26:55.159 --rc geninfo_unexecuted_blocks=1 00:26:55.159 00:26:55.159 ' 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:55.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.159 --rc genhtml_branch_coverage=1 00:26:55.159 --rc genhtml_function_coverage=1 00:26:55.159 --rc genhtml_legend=1 00:26:55.159 --rc geninfo_all_blocks=1 00:26:55.159 --rc geninfo_unexecuted_blocks=1 00:26:55.159 00:26:55.159 ' 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:55.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.159 --rc genhtml_branch_coverage=1 00:26:55.159 --rc genhtml_function_coverage=1 00:26:55.159 --rc genhtml_legend=1 00:26:55.159 --rc geninfo_all_blocks=1 00:26:55.159 --rc geninfo_unexecuted_blocks=1 00:26:55.159 00:26:55.159 ' 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:55.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.159 --rc genhtml_branch_coverage=1 00:26:55.159 --rc genhtml_function_coverage=1 00:26:55.159 --rc genhtml_legend=1 00:26:55.159 --rc geninfo_all_blocks=1 00:26:55.159 --rc geninfo_unexecuted_blocks=1 00:26:55.159 00:26:55.159 ' 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.159 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:55.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:55.160 07:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:03.307 07:41:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:03.307 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:03.307 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:03.307 Found net devices under 0000:31:00.0: cvl_0_0 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:03.307 Found net devices under 0000:31:00.1: cvl_0_1 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:03.307 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:03.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:27:03.308 00:27:03.308 --- 10.0.0.2 ping statistics --- 00:27:03.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.308 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:03.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:27:03.308 00:27:03.308 --- 10.0.0.1 ping statistics --- 00:27:03.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.308 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.308 07:41:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:03.308 07:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:06.607 Waiting for block devices as requested 00:27:06.607 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:06.607 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:06.607 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:06.607 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:06.608 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:06.868 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:06.868 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:06.868 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:07.129 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:07.129 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:07.389 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:07.389 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:07.389 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:07.650 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:07.650 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:07.650 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:07.910 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:08.171 No valid GPT data, bailing 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:08.171 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:27:08.432 00:27:08.432 Discovery Log Number of Records 2, Generation counter 2 00:27:08.432 =====Discovery Log Entry 0====== 00:27:08.432 trtype: tcp 00:27:08.432 adrfam: ipv4 00:27:08.432 subtype: current discovery subsystem 00:27:08.432 treq: not specified, sq flow control disable supported 00:27:08.432 portid: 1 00:27:08.432 trsvcid: 4420 00:27:08.432 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:08.432 traddr: 10.0.0.1 00:27:08.432 eflags: none 00:27:08.432 sectype: none 00:27:08.432 =====Discovery Log Entry 1====== 00:27:08.432 trtype: tcp 00:27:08.432 adrfam: ipv4 00:27:08.432 subtype: nvme subsystem 00:27:08.432 treq: not specified, sq flow control disable 
supported 00:27:08.432 portid: 1 00:27:08.432 trsvcid: 4420 00:27:08.432 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:08.432 traddr: 10.0.0.1 00:27:08.432 eflags: none 00:27:08.432 sectype: none 00:27:08.432 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:08.432 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:08.432 ===================================================== 00:27:08.432 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:08.432 ===================================================== 00:27:08.432 Controller Capabilities/Features 00:27:08.432 ================================ 00:27:08.432 Vendor ID: 0000 00:27:08.432 Subsystem Vendor ID: 0000 00:27:08.432 Serial Number: 7aa9b49113908c789740 00:27:08.432 Model Number: Linux 00:27:08.432 Firmware Version: 6.8.9-20 00:27:08.432 Recommended Arb Burst: 0 00:27:08.432 IEEE OUI Identifier: 00 00 00 00:27:08.432 Multi-path I/O 00:27:08.432 May have multiple subsystem ports: No 00:27:08.432 May have multiple controllers: No 00:27:08.432 Associated with SR-IOV VF: No 00:27:08.432 Max Data Transfer Size: Unlimited 00:27:08.432 Max Number of Namespaces: 0 00:27:08.432 Max Number of I/O Queues: 1024 00:27:08.432 NVMe Specification Version (VS): 1.3 00:27:08.432 NVMe Specification Version (Identify): 1.3 00:27:08.432 Maximum Queue Entries: 1024 00:27:08.432 Contiguous Queues Required: No 00:27:08.432 Arbitration Mechanisms Supported 00:27:08.432 Weighted Round Robin: Not Supported 00:27:08.432 Vendor Specific: Not Supported 00:27:08.432 Reset Timeout: 7500 ms 00:27:08.432 Doorbell Stride: 4 bytes 00:27:08.432 NVM Subsystem Reset: Not Supported 00:27:08.432 Command Sets Supported 00:27:08.432 NVM Command Set: Supported 00:27:08.432 Boot Partition: Not Supported 00:27:08.432 Memory Page Size Minimum: 4096 bytes 00:27:08.432 Memory Page Size Maximum: 4096 bytes 00:27:08.432 Persistent Memory Region: Not Supported 00:27:08.432 Optional Asynchronous Events Supported 00:27:08.432 Namespace Attribute Notices: Not Supported 00:27:08.432 Firmware Activation Notices: Not Supported 00:27:08.432 ANA Change Notices: Not Supported 00:27:08.432 PLE Aggregate Log Change Notices: Not Supported 00:27:08.432 LBA Status Info Alert Notices: Not Supported 00:27:08.432 EGE Aggregate Log Change Notices: Not Supported 00:27:08.432 Normal NVM Subsystem Shutdown event: Not Supported 00:27:08.432 Zone Descriptor Change Notices: Not Supported 00:27:08.432 Discovery Log Change Notices: Supported 00:27:08.432 Controller Attributes 00:27:08.432 128-bit Host Identifier: Not Supported 00:27:08.432 Non-Operational Permissive Mode: Not Supported 00:27:08.432 NVM Sets: Not Supported 00:27:08.432 Read Recovery Levels: Not Supported 00:27:08.432 Endurance Groups: Not Supported 00:27:08.432 Predictable Latency Mode: Not Supported 00:27:08.432 Traffic Based Keep ALive: Not Supported 00:27:08.432 Namespace Granularity: Not Supported 00:27:08.432 SQ Associations: Not Supported 00:27:08.432 UUID List: Not Supported 00:27:08.432 Multi-Domain Subsystem: Not Supported 00:27:08.432 Fixed Capacity Management: Not Supported 00:27:08.432 Variable Capacity Management: Not Supported 00:27:08.432 Delete Endurance Group: Not Supported 00:27:08.432 Delete NVM Set: Not Supported 00:27:08.432 Extended LBA Formats Supported: Not Supported 00:27:08.432 Flexible Data Placement 
Supported: Not Supported 00:27:08.432 00:27:08.432 Controller Memory Buffer Support 00:27:08.432 ================================ 00:27:08.432 Supported: No 00:27:08.432 00:27:08.432 Persistent Memory Region Support 00:27:08.432 ================================ 00:27:08.432 Supported: No 00:27:08.432 00:27:08.432 Admin Command Set Attributes 00:27:08.432 ============================ 00:27:08.432 Security Send/Receive: Not Supported 00:27:08.432 Format NVM: Not Supported 00:27:08.432 Firmware Activate/Download: Not Supported 00:27:08.432 Namespace Management: Not Supported 00:27:08.432 Device Self-Test: Not Supported 00:27:08.432 Directives: Not Supported 00:27:08.432 NVMe-MI: Not Supported 00:27:08.432 Virtualization Management: Not Supported 00:27:08.432 Doorbell Buffer Config: Not Supported 00:27:08.432 Get LBA Status Capability: Not Supported 00:27:08.432 Command & Feature Lockdown Capability: Not Supported 00:27:08.432 Abort Command Limit: 1 00:27:08.432 Async Event Request Limit: 1 00:27:08.432 Number of Firmware Slots: N/A 00:27:08.432 Firmware Slot 1 Read-Only: N/A 00:27:08.432 Firmware Activation Without Reset: N/A 00:27:08.432 Multiple Update Detection Support: N/A 00:27:08.432 Firmware Update Granularity: No Information Provided 00:27:08.432 Per-Namespace SMART Log: No 00:27:08.432 Asymmetric Namespace Access Log Page: Not Supported 00:27:08.432 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:08.432 Command Effects Log Page: Not Supported 00:27:08.432 Get Log Page Extended Data: Supported 00:27:08.432 Telemetry Log Pages: Not Supported 00:27:08.432 Persistent Event Log Pages: Not Supported 00:27:08.432 Supported Log Pages Log Page: May Support 00:27:08.432 Commands Supported & Effects Log Page: Not Supported 00:27:08.432 Feature Identifiers & Effects Log Page:May Support 00:27:08.432 NVMe-MI Commands & Effects Log Page: May Support 00:27:08.432 Data Area 4 for Telemetry Log: Not Supported 00:27:08.432 Error Log Page Entries Supported: 1 00:27:08.433 Keep Alive: Not Supported 00:27:08.433 00:27:08.433 NVM Command Set Attributes 00:27:08.433 ========================== 00:27:08.433 Submission Queue Entry Size 00:27:08.433 Max: 1 00:27:08.433 Min: 1 00:27:08.433 Completion Queue Entry Size 00:27:08.433 Max: 1 00:27:08.433 Min: 1 00:27:08.433 Number of Namespaces: 0 00:27:08.433 Compare Command: Not Supported 00:27:08.433 Write Uncorrectable Command: Not Supported 00:27:08.433 Dataset Management Command: Not Supported 00:27:08.433 Write Zeroes Command: Not Supported 00:27:08.433 Set Features Save Field: Not Supported 00:27:08.433 Reservations: Not Supported 00:27:08.433 Timestamp: Not Supported 00:27:08.433 Copy: Not Supported 00:27:08.433 Volatile Write Cache: Not Present 00:27:08.433 Atomic Write Unit (Normal): 1 00:27:08.433 Atomic Write Unit (PFail): 1 00:27:08.433 Atomic Compare & Write Unit: 1 00:27:08.433 Fused Compare & Write: Not Supported 00:27:08.433 Scatter-Gather List 00:27:08.433 SGL Command Set: Supported 00:27:08.433 SGL Keyed: Not Supported 00:27:08.433 SGL Bit Bucket Descriptor: Not Supported 00:27:08.433 SGL Metadata Pointer: Not Supported 00:27:08.433 Oversized SGL: Not Supported 00:27:08.433 SGL Metadata Address: Not Supported 00:27:08.433 SGL Offset: Supported 00:27:08.433 Transport SGL Data Block: Not Supported 00:27:08.433 Replay Protected Memory Block: Not Supported 00:27:08.433 00:27:08.433 Firmware Slot Information 00:27:08.433 ========================= 00:27:08.433 Active slot: 0 00:27:08.433 00:27:08.433 00:27:08.433 Error Log 00:27:08.433 
========= 00:27:08.433 00:27:08.433 Active Namespaces 00:27:08.433 ================= 00:27:08.433 Discovery Log Page 00:27:08.433 ================== 00:27:08.433 Generation Counter: 2 00:27:08.433 Number of Records: 2 00:27:08.433 Record Format: 0 00:27:08.433 00:27:08.433 Discovery Log Entry 0 00:27:08.433 ---------------------- 00:27:08.433 Transport Type: 3 (TCP) 00:27:08.433 Address Family: 1 (IPv4) 00:27:08.433 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:08.433 Entry Flags: 00:27:08.433 Duplicate Returned Information: 0 00:27:08.433 Explicit Persistent Connection Support for Discovery: 0 00:27:08.433 Transport Requirements: 00:27:08.433 Secure Channel: Not Specified 00:27:08.433 Port ID: 1 (0x0001) 00:27:08.433 Controller ID: 65535 (0xffff) 00:27:08.433 Admin Max SQ Size: 32 00:27:08.433 Transport Service Identifier: 4420 00:27:08.433 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:08.433 Transport Address: 10.0.0.1 00:27:08.433 Discovery Log Entry 1 00:27:08.433 ---------------------- 00:27:08.433 Transport Type: 3 (TCP) 00:27:08.433 Address Family: 1 (IPv4) 00:27:08.433 Subsystem Type: 2 (NVM Subsystem) 00:27:08.433 Entry Flags: 00:27:08.433 Duplicate Returned Information: 0 00:27:08.433 Explicit Persistent Connection Support for Discovery: 0 00:27:08.433 Transport Requirements: 00:27:08.433 Secure Channel: Not Specified 00:27:08.433 Port ID: 1 (0x0001) 00:27:08.433 Controller ID: 65535 (0xffff) 00:27:08.433 Admin Max SQ Size: 32 00:27:08.433 Transport Service Identifier: 4420 00:27:08.433 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:08.433 Transport Address: 10.0.0.1 00:27:08.433 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:08.694 get_feature(0x01) failed 00:27:08.694 get_feature(0x02) failed 00:27:08.694 get_feature(0x04) failed 00:27:08.694 ===================================================== 00:27:08.694 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:08.694 ===================================================== 00:27:08.694 Controller Capabilities/Features 00:27:08.694 ================================ 00:27:08.694 Vendor ID: 0000 00:27:08.694 Subsystem Vendor ID: 0000 00:27:08.694 Serial Number: b728e0a282883ff45f3c 00:27:08.694 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:08.694 Firmware Version: 6.8.9-20 00:27:08.694 Recommended Arb Burst: 6 00:27:08.694 IEEE OUI Identifier: 00 00 00 00:27:08.694 Multi-path I/O 00:27:08.694 May have multiple subsystem ports: Yes 00:27:08.694 May have multiple controllers: Yes 00:27:08.694 Associated with SR-IOV VF: No 00:27:08.694 Max Data Transfer Size: Unlimited 00:27:08.694 Max Number of Namespaces: 1024 00:27:08.694 Max Number of I/O Queues: 128 00:27:08.694 NVMe Specification Version (VS): 1.3 00:27:08.694 NVMe Specification Version (Identify): 1.3 00:27:08.694 Maximum Queue Entries: 1024 00:27:08.694 Contiguous Queues Required: No 00:27:08.694 Arbitration Mechanisms Supported 00:27:08.694 Weighted Round Robin: Not Supported 00:27:08.694 Vendor Specific: Not Supported 00:27:08.694 Reset Timeout: 7500 ms 00:27:08.694 Doorbell Stride: 4 bytes 00:27:08.694 NVM Subsystem Reset: Not Supported 00:27:08.694 Command Sets Supported 00:27:08.694 NVM Command Set: Supported 00:27:08.694 Boot Partition: Not Supported 00:27:08.694 
Memory Page Size Minimum: 4096 bytes 00:27:08.694 Memory Page Size Maximum: 4096 bytes 00:27:08.694 Persistent Memory Region: Not Supported 00:27:08.694 Optional Asynchronous Events Supported 00:27:08.694 Namespace Attribute Notices: Supported 00:27:08.694 Firmware Activation Notices: Not Supported 00:27:08.694 ANA Change Notices: Supported 00:27:08.694 PLE Aggregate Log Change Notices: Not Supported 00:27:08.694 LBA Status Info Alert Notices: Not Supported 00:27:08.694 EGE Aggregate Log Change Notices: Not Supported 00:27:08.694 Normal NVM Subsystem Shutdown event: Not Supported 00:27:08.694 Zone Descriptor Change Notices: Not Supported 00:27:08.694 Discovery Log Change Notices: Not Supported 00:27:08.694 Controller Attributes 00:27:08.694 128-bit Host Identifier: Supported 00:27:08.694 Non-Operational Permissive Mode: Not Supported 00:27:08.694 NVM Sets: Not Supported 00:27:08.694 Read Recovery Levels: Not Supported 00:27:08.694 Endurance Groups: Not Supported 00:27:08.694 Predictable Latency Mode: Not Supported 00:27:08.695 Traffic Based Keep ALive: Supported 00:27:08.695 Namespace Granularity: Not Supported 00:27:08.695 SQ Associations: Not Supported 00:27:08.695 UUID List: Not Supported 00:27:08.695 Multi-Domain Subsystem: Not Supported 00:27:08.695 Fixed Capacity Management: Not Supported 00:27:08.695 Variable Capacity Management: Not Supported 00:27:08.695 Delete Endurance Group: Not Supported 00:27:08.695 Delete NVM Set: Not Supported 00:27:08.695 Extended LBA Formats Supported: Not Supported 00:27:08.695 Flexible Data Placement Supported: Not Supported 00:27:08.695 00:27:08.695 Controller Memory Buffer Support 00:27:08.695 ================================ 00:27:08.695 Supported: No 00:27:08.695 00:27:08.695 Persistent Memory Region Support 00:27:08.695 ================================ 00:27:08.695 Supported: No 00:27:08.695 00:27:08.695 Admin Command Set Attributes 00:27:08.695 ============================ 00:27:08.695 Security Send/Receive: Not Supported 00:27:08.695 Format NVM: Not Supported 00:27:08.695 Firmware Activate/Download: Not Supported 00:27:08.695 Namespace Management: Not Supported 00:27:08.695 Device Self-Test: Not Supported 00:27:08.695 Directives: Not Supported 00:27:08.695 NVMe-MI: Not Supported 00:27:08.695 Virtualization Management: Not Supported 00:27:08.695 Doorbell Buffer Config: Not Supported 00:27:08.695 Get LBA Status Capability: Not Supported 00:27:08.695 Command & Feature Lockdown Capability: Not Supported 00:27:08.695 Abort Command Limit: 4 00:27:08.695 Async Event Request Limit: 4 00:27:08.695 Number of Firmware Slots: N/A 00:27:08.695 Firmware Slot 1 Read-Only: N/A 00:27:08.695 Firmware Activation Without Reset: N/A 00:27:08.695 Multiple Update Detection Support: N/A 00:27:08.695 Firmware Update Granularity: No Information Provided 00:27:08.695 Per-Namespace SMART Log: Yes 00:27:08.695 Asymmetric Namespace Access Log Page: Supported 00:27:08.695 ANA Transition Time : 10 sec 00:27:08.695 00:27:08.695 Asymmetric Namespace Access Capabilities 00:27:08.695 ANA Optimized State : Supported 00:27:08.695 ANA Non-Optimized State : Supported 00:27:08.695 ANA Inaccessible State : Supported 00:27:08.695 ANA Persistent Loss State : Supported 00:27:08.695 ANA Change State : Supported 00:27:08.695 ANAGRPID is not changed : No 00:27:08.695 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:08.695 00:27:08.695 ANA Group Identifier Maximum : 128 00:27:08.695 Number of ANA Group Identifiers : 128 00:27:08.695 Max Number of Allowed Namespaces : 1024 00:27:08.695 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:08.695 Command Effects Log Page: Supported 00:27:08.695 Get Log Page Extended Data: Supported 00:27:08.695 Telemetry Log Pages: Not Supported 00:27:08.695 Persistent Event Log Pages: Not Supported 00:27:08.695 Supported Log Pages Log Page: May Support 00:27:08.695 Commands Supported & Effects Log Page: Not Supported 00:27:08.695 Feature Identifiers & Effects Log Page:May Support 00:27:08.695 NVMe-MI Commands & Effects Log Page: May Support 00:27:08.695 Data Area 4 for Telemetry Log: Not Supported 00:27:08.695 Error Log Page Entries Supported: 128 00:27:08.695 Keep Alive: Supported 00:27:08.695 Keep Alive Granularity: 1000 ms 00:27:08.695 00:27:08.695 NVM Command Set Attributes 00:27:08.695 ========================== 00:27:08.695 Submission Queue Entry Size 00:27:08.695 Max: 64 00:27:08.695 Min: 64 00:27:08.695 Completion Queue Entry Size 00:27:08.695 Max: 16 00:27:08.695 Min: 16 00:27:08.695 Number of Namespaces: 1024 00:27:08.695 Compare Command: Not Supported 00:27:08.695 Write Uncorrectable Command: Not Supported 00:27:08.695 Dataset Management Command: Supported 00:27:08.695 Write Zeroes Command: Supported 00:27:08.695 Set Features Save Field: Not Supported 00:27:08.695 Reservations: Not Supported 00:27:08.695 Timestamp: Not Supported 00:27:08.695 Copy: Not Supported 00:27:08.695 Volatile Write Cache: Present 00:27:08.695 Atomic Write Unit (Normal): 1 00:27:08.695 Atomic Write Unit (PFail): 1 00:27:08.695 Atomic Compare & Write Unit: 1 00:27:08.695 Fused Compare & Write: Not Supported 00:27:08.695 Scatter-Gather List 00:27:08.695 SGL Command Set: Supported 00:27:08.695 SGL Keyed: Not Supported 00:27:08.695 SGL Bit Bucket Descriptor: Not Supported 00:27:08.695 SGL Metadata Pointer: Not Supported 00:27:08.695 Oversized SGL: Not Supported 00:27:08.695 SGL Metadata Address: Not Supported 00:27:08.695 SGL Offset: Supported 00:27:08.695 Transport SGL Data Block: Not Supported 00:27:08.695 Replay Protected Memory Block: Not Supported 00:27:08.695 00:27:08.695 Firmware Slot Information 00:27:08.695 ========================= 00:27:08.695 Active slot: 0 00:27:08.695 00:27:08.695 Asymmetric Namespace Access 00:27:08.695 =========================== 00:27:08.695 Change Count : 0 00:27:08.695 Number of ANA Group Descriptors : 1 00:27:08.695 ANA Group Descriptor : 0 00:27:08.695 ANA Group ID : 1 00:27:08.695 Number of NSID Values : 1 00:27:08.695 Change Count : 0 00:27:08.695 ANA State : 1 00:27:08.695 Namespace Identifier : 1 00:27:08.695 00:27:08.695 Commands Supported and Effects 00:27:08.695 ============================== 00:27:08.695 Admin Commands 00:27:08.695 -------------- 00:27:08.695 Get Log Page (02h): Supported 00:27:08.695 Identify (06h): Supported 00:27:08.695 Abort (08h): Supported 00:27:08.695 Set Features (09h): Supported 00:27:08.695 Get Features (0Ah): Supported 00:27:08.695 Asynchronous Event Request (0Ch): Supported 00:27:08.695 Keep Alive (18h): Supported 00:27:08.695 I/O Commands 00:27:08.695 ------------ 00:27:08.695 Flush (00h): Supported 00:27:08.695 Write (01h): Supported LBA-Change 00:27:08.695 Read (02h): Supported 00:27:08.695 Write Zeroes (08h): Supported LBA-Change 00:27:08.695 Dataset Management (09h): Supported 00:27:08.695 00:27:08.695 Error Log 00:27:08.695 ========= 00:27:08.695 Entry: 0 00:27:08.695 Error Count: 0x3 00:27:08.695 Submission Queue Id: 0x0 00:27:08.695 Command Id: 0x5 00:27:08.695 Phase Bit: 0 00:27:08.695 Status Code: 0x2 00:27:08.695 Status Code Type: 0x0 00:27:08.695 Do Not Retry: 1 00:27:08.695 
Error Location: 0x28 00:27:08.695 LBA: 0x0 00:27:08.695 Namespace: 0x0 00:27:08.695 Vendor Log Page: 0x0 00:27:08.695 ----------- 00:27:08.695 Entry: 1 00:27:08.695 Error Count: 0x2 00:27:08.695 Submission Queue Id: 0x0 00:27:08.695 Command Id: 0x5 00:27:08.695 Phase Bit: 0 00:27:08.695 Status Code: 0x2 00:27:08.695 Status Code Type: 0x0 00:27:08.695 Do Not Retry: 1 00:27:08.695 Error Location: 0x28 00:27:08.695 LBA: 0x0 00:27:08.695 Namespace: 0x0 00:27:08.695 Vendor Log Page: 0x0 00:27:08.695 ----------- 00:27:08.695 Entry: 2 00:27:08.695 Error Count: 0x1 00:27:08.695 Submission Queue Id: 0x0 00:27:08.695 Command Id: 0x4 00:27:08.695 Phase Bit: 0 00:27:08.695 Status Code: 0x2 00:27:08.695 Status Code Type: 0x0 00:27:08.695 Do Not Retry: 1 00:27:08.695 Error Location: 0x28 00:27:08.695 LBA: 0x0 00:27:08.695 Namespace: 0x0 00:27:08.695 Vendor Log Page: 0x0 00:27:08.695 00:27:08.695 Number of Queues 00:27:08.695 ================ 00:27:08.695 Number of I/O Submission Queues: 128 00:27:08.695 Number of I/O Completion Queues: 128 00:27:08.695 00:27:08.695 ZNS Specific Controller Data 00:27:08.695 ============================ 00:27:08.695 Zone Append Size Limit: 0 00:27:08.695 00:27:08.695 00:27:08.695 Active Namespaces 00:27:08.695 ================= 00:27:08.695 get_feature(0x05) failed 00:27:08.695 Namespace ID:1 00:27:08.695 Command Set Identifier: NVM (00h) 00:27:08.695 Deallocate: Supported 00:27:08.695 Deallocated/Unwritten Error: Not Supported 00:27:08.695 Deallocated Read Value: Unknown 00:27:08.695 Deallocate in Write Zeroes: Not Supported 00:27:08.695 Deallocated Guard Field: 0xFFFF 00:27:08.695 Flush: Supported 00:27:08.695 Reservation: Not Supported 00:27:08.695 Namespace Sharing Capabilities: Multiple Controllers 00:27:08.695 Size (in LBAs): 3750748848 (1788GiB) 00:27:08.695 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:08.695 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:08.695 UUID: e34970e9-d15d-4409-a711-5c5ac86768c9 00:27:08.695 Thin Provisioning: Not Supported 00:27:08.695 Per-NS Atomic Units: Yes 00:27:08.695 Atomic Write Unit (Normal): 8 00:27:08.695 Atomic Write Unit (PFail): 8 00:27:08.696 Preferred Write Granularity: 8 00:27:08.696 Atomic Compare & Write Unit: 8 00:27:08.696 Atomic Boundary Size (Normal): 0 00:27:08.696 Atomic Boundary Size (PFail): 0 00:27:08.696 Atomic Boundary Offset: 0 00:27:08.696 NGUID/EUI64 Never Reused: No 00:27:08.696 ANA group ID: 1 00:27:08.696 Namespace Write Protected: No 00:27:08.696 Number of LBA Formats: 1 00:27:08.696 Current LBA Format: LBA Format #00 00:27:08.696 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:08.696 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:08.696 rmmod nvme_tcp 00:27:08.696 rmmod nvme_fabrics 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.696 07:41:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.611 07:41:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.611 07:41:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:10.611 07:41:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:10.611 07:41:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:10.611 07:41:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.611 07:41:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:10.872 07:41:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:10.872 07:41:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.872 07:41:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:10.872 07:41:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:10.872 07:41:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:15.082 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:15.082 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:15.082 00:27:15.082 real 0m20.085s 00:27:15.082 user 0m5.404s 00:27:15.082 sys 0m11.616s 00:27:15.082 07:41:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:15.082 07:41:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:15.082 ************************************ 00:27:15.082 END TEST nvmf_identify_kernel_target 00:27:15.082 ************************************ 00:27:15.082 07:41:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:15.082 07:41:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:15.082 07:41:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:15.082 07:41:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.082 ************************************ 00:27:15.082 START TEST nvmf_auth_host 00:27:15.082 ************************************ 00:27:15.082 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:15.082 * Looking for test storage... 
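The clean_kernel_target trace above tears down the kernel nvmet configfs tree strictly child-first: the namespace is disabled, the port is unlinked from the subsystem, and only then can the namespace, port, and subsystem directories be removed and the modules unloaded. A minimal standalone sketch of the same sequence, reusing the NQN and port id from this run; xtrace elides the redirection target of the traced `echo 0`, which is assumed here to be the namespace enable file:

#!/usr/bin/env bash
# Teardown of a kernel NVMe-oF target built under configfs.
# configfs refuses to rmdir a directory that still has children or an
# active port->subsystem symlink, hence the strict ordering below.
nqn=nqn.2016-06.io.spdk:testnqn     # subsystem NQN from this run
port=1                              # nvmet port id from this run
base=/sys/kernel/config/nvmet

echo 0 > "$base/subsystems/$nqn/namespaces/1/enable"  # disable namespace (assumed target of the traced 'echo 0')
rm -f "$base/ports/$port/subsystems/$nqn"             # unlink port from subsystem
rmdir "$base/subsystems/$nqn/namespaces/1"            # remove the namespace
rmdir "$base/ports/$port"                             # remove the port
rmdir "$base/subsystems/$nqn"                         # remove the subsystem
modprobe -r nvmet_tcp nvmet                           # unload the kernel target modules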
00:27:15.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:15.082 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:15.082 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:15.082 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:15.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.346 --rc genhtml_branch_coverage=1 00:27:15.346 --rc genhtml_function_coverage=1 00:27:15.346 --rc genhtml_legend=1 00:27:15.346 --rc geninfo_all_blocks=1 00:27:15.346 --rc geninfo_unexecuted_blocks=1 00:27:15.346 00:27:15.346 ' 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:15.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.346 --rc genhtml_branch_coverage=1 00:27:15.346 --rc genhtml_function_coverage=1 00:27:15.346 --rc genhtml_legend=1 00:27:15.346 --rc geninfo_all_blocks=1 00:27:15.346 --rc geninfo_unexecuted_blocks=1 00:27:15.346 00:27:15.346 ' 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:15.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.346 --rc genhtml_branch_coverage=1 00:27:15.346 --rc genhtml_function_coverage=1 00:27:15.346 --rc genhtml_legend=1 00:27:15.346 --rc geninfo_all_blocks=1 00:27:15.346 --rc geninfo_unexecuted_blocks=1 00:27:15.346 00:27:15.346 ' 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:15.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.346 --rc genhtml_branch_coverage=1 00:27:15.346 --rc genhtml_function_coverage=1 00:27:15.346 --rc genhtml_legend=1 00:27:15.346 --rc geninfo_all_blocks=1 00:27:15.346 --rc geninfo_unexecuted_blocks=1 00:27:15.346 00:27:15.346 ' 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.346 07:41:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:15.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:15.346 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:15.347 07:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:23.494 07:41:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:23.494 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:23.494 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.494 
07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:23.494 Found net devices under 0000:31:00.0: cvl_0_0 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:23.494 Found net devices under 0000:31:00.1: cvl_0_1 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.494 07:41:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.494 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:23.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:27:23.495 00:27:23.495 --- 10.0.0.2 ping statistics --- 00:27:23.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.495 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:23.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:27:23.495 00:27:23.495 --- 10.0.0.1 ping statistics --- 00:27:23.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.495 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3553624 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3553624 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3553624 ']' 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
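The nvmf_tcp_init trace above shows how the harness makes a single dual-port E810 host look like two machines: the target-side port (cvl_0_0) is moved into its own network namespace, each side gets an address on 10.0.0.0/24, an iptables rule admits NVMe/TCP traffic, and a ping in each direction proves the path. Condensed into a standalone sketch, with the interface and namespace names taken from this run:

# Split one dual-port NIC into a target namespace and a root-namespace initiator.
ns=cvl_0_0_ns_spdk
tgt_if=cvl_0_0        # target-side port (from this run)
ini_if=cvl_0_1        # initiator-side port (from this run)

ip -4 addr flush "$tgt_if"
ip -4 addr flush "$ini_if"
ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"                          # target port leaves the root ns
ip addr add 10.0.0.1/24 dev "$ini_if"                      # initiator IP
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target IP
ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP back in
ping -c 1 10.0.0.2                                         # root ns -> target ns
ip netns exec "$ns" ping -c 1 10.0.0.1                     # target ns -> root ns

nvmf_tgt is then launched inside that namespace (the traced `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth`), so the target listens at 10.0.0.2 while the auth tests connect from 10.0.0.1 in the root namespace.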
00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:23.495 07:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5c89531e01a18e85532f947474f0b6f0 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:23.756 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.tZa 00:27:23.757 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5c89531e01a18e85532f947474f0b6f0 0 00:27:23.757 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5c89531e01a18e85532f947474f0b6f0 0 00:27:23.757 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:23.757 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:23.757 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5c89531e01a18e85532f947474f0b6f0 00:27:23.757 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:23.757 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.tZa 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.tZa 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.tZa 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:24.018 07:41:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=08c1326b91f64e2c45c15b9d08d6435f73b28d58cfcb441b8d8a1cd49129b29b 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Xqa 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 08c1326b91f64e2c45c15b9d08d6435f73b28d58cfcb441b8d8a1cd49129b29b 3 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 08c1326b91f64e2c45c15b9d08d6435f73b28d58cfcb441b8d8a1cd49129b29b 3 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=08c1326b91f64e2c45c15b9d08d6435f73b28d58cfcb441b8d8a1cd49129b29b 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:24.018 07:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Xqa 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Xqa 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Xqa 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5b494ca1d89c552fea896648b49ceb578c3e224c13046a36 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.IRW 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5b494ca1d89c552fea896648b49ceb578c3e224c13046a36 0 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5b494ca1d89c552fea896648b49ceb578c3e224c13046a36 0 
00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5b494ca1d89c552fea896648b49ceb578c3e224c13046a36 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.IRW 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.IRW 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.IRW 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:24.018 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1e90c6edd186f40097f106c033d184aabf2b9f9347121133 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.v5Y 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1e90c6edd186f40097f106c033d184aabf2b9f9347121133 2 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1e90c6edd186f40097f106c033d184aabf2b9f9347121133 2 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1e90c6edd186f40097f106c033d184aabf2b9f9347121133 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.v5Y 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.v5Y 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.v5Y 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.019 07:41:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e101ec72476d6f2053b4c19832728686 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kRZ 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e101ec72476d6f2053b4c19832728686 1 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e101ec72476d6f2053b4c19832728686 1 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e101ec72476d6f2053b4c19832728686 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:24.019 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kRZ 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kRZ 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.kRZ 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d736d5dbd895d9e3215ae4e65d89b5f3 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.iAW 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d736d5dbd895d9e3215ae4e65d89b5f3 1 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d736d5dbd895d9e3215ae4e65d89b5f3 1 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=d736d5dbd895d9e3215ae4e65d89b5f3 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.iAW 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.iAW 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.iAW 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b309720eaf1b83cf1aa4cdcf2c93fc3f55cf018156e98d9c 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.R5G 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b309720eaf1b83cf1aa4cdcf2c93fc3f55cf018156e98d9c 2 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b309720eaf1b83cf1aa4cdcf2c93fc3f55cf018156e98d9c 2 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b309720eaf1b83cf1aa4cdcf2c93fc3f55cf018156e98d9c 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:24.281 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.R5G 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.R5G 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.R5G 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:24.282 07:41:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=21bc3333f3dff9aee50e45e32eaa584e 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4XY 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 21bc3333f3dff9aee50e45e32eaa584e 0 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 21bc3333f3dff9aee50e45e32eaa584e 0 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=21bc3333f3dff9aee50e45e32eaa584e 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4XY 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4XY 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.4XY 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5e31a8c1781ba79b53c7d92d6b75a326cde4f91f952a2143cd601ebf03ff9c6a 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0Wp 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5e31a8c1781ba79b53c7d92d6b75a326cde4f91f952a2143cd601ebf03ff9c6a 3 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5e31a8c1781ba79b53c7d92d6b75a326cde4f91f952a2143cd601ebf03ff9c6a 3 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5e31a8c1781ba79b53c7d92d6b75a326cde4f91f952a2143cd601ebf03ff9c6a 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:24.282 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0Wp 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0Wp 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.0Wp 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3553624 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3553624 ']' 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tZa 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.544 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Xqa ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xqa 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.IRW 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.v5Y ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.v5Y 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.kRZ 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.iAW ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iAW 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.R5G 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.4XY ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.4XY 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.0Wp 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.806 07:41:42 
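[editor's note] With all five key files (and four controller-key files) on disk, auth.sh@80-82 registers each one with the running target over the JSON-RPC socket; rpc_cmd in these traces wraps scripts/rpc.py against /var/tmp/spdk.sock, the socket named in the waitforlisten line above. A sketch of the same loop, assuming keys/ckeys hold the file paths exactly as echoed above and SPDK_DIR points at the checkout:

```bash
# Sketch: the traced keyring registrations, driven directly over rpc.py.
rpc="$SPDK_DIR/scripts/rpc.py"      # assumed helper path; rpc_cmd wraps this
for i in "${!keys[@]}"; do
    "$rpc" -s /var/tmp/spdk.sock keyring_file_add_key "key$i" "${keys[i]}"
    # ckeys[4] is empty, so keyid 4 gets no controller key (the auth.sh@82 guard)
    [[ -n ${ckeys[i]} ]] &&
        "$rpc" -s /var/tmp/spdk.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
```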
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.806 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.807 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:24.807 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:24.807 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:24.807 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:24.807 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:24.807 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:24.807 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:24.807 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:24.807 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:24.807 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:24.807 07:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:28.139 Waiting for block devices as requested 00:27:28.139 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:28.398 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:28.398 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:28.398 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:28.658 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:28.658 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:28.658 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:28.919 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:28.919 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:29.179 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:29.179 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:29.179 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:29.179 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:29.440 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:29.440 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:29.440 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:29.440 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:30.824 No valid GPT data, bailing 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:30.824 07:41:48 
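[editor's note] configure_kernel_target (nvmf/common.sh@660 onwards) builds the kernel-side NVMe-oF target for the loopback host: load nvmet, run setup.sh reset to rebind devices (the vfio-pci -> ioatdma/nvme lines above), pick the first /dev/nvme* block device that is neither zoned nor carrying a GPT ("No valid GPT data, bailing" is the probe's expected output), then create the subsystem, namespace, and port nodes with the three mkdir calls just traced. The echo lines that follow in the log write the matching configfs attributes; xtrace does not capture redirection targets, so the attribute names below are assumptions based on the usual nvmet configfs layout:

```bash
# Sketch: assumed redirection targets for the traced configfs writes.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # @693
echo 1            > "$subsys/attr_allow_any_host"             # @695
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"        # @696
echo 1            > "$subsys/namespaces/1/enable"             # @697
echo 10.0.0.1     > "$port/addr_traddr"                       # @699
echo tcp          > "$port/addr_trtype"                       # @700
echo 4420         > "$port/addr_trsvcid"                      # @701
echo ipv4         > "$port/addr_adrfam"                       # @702
ln -s "$subsys" "$port/subsystems/"                           # @705
```

The nvme discover call right after is the sanity check: two discovery-log records (the discovery subsystem plus nqn.2024-02.io.spdk:cnode0) mean the port is listening on 10.0.0.1:4420.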
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:27:30.824 00:27:30.824 Discovery Log Number of Records 2, Generation counter 2 00:27:30.824 =====Discovery Log Entry 0====== 00:27:30.824 trtype: tcp 00:27:30.824 adrfam: ipv4 00:27:30.824 subtype: current discovery subsystem 00:27:30.824 treq: not specified, sq flow control disable supported 00:27:30.824 portid: 1 00:27:30.824 trsvcid: 4420 00:27:30.824 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:30.824 traddr: 10.0.0.1 00:27:30.824 eflags: none 00:27:30.824 sectype: none 00:27:30.824 =====Discovery Log Entry 1====== 00:27:30.824 trtype: tcp 00:27:30.824 adrfam: ipv4 00:27:30.824 subtype: nvme subsystem 00:27:30.824 treq: not specified, sq flow control disable supported 00:27:30.824 portid: 1 00:27:30.824 trsvcid: 4420 00:27:30.824 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:30.824 traddr: 10.0.0.1 00:27:30.824 eflags: none 00:27:30.824 sectype: none 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.824 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.825 nvme0n1 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.825 07:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]] 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
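[editor's note] nvmet_auth_init (auth.sh@35-38) flips the subsystem from allow-any-host to an explicit allow list containing nqn.2024-02.io.spdk:host0, and nvmet_auth_set_key (auth.sh@42-51) then loads one digest/DH-group/key combination into that host entry, as just traced for keyid 0. Once more the echo targets are not in the trace, so the per-host DH-HMAC-CHAP attribute names here are assumptions:

```bash
# Sketch: assumed targets for the auth.sh@37/38 and @48-51 writes (keyid 0).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"        # @37: require the allow list
ln -s "$host" "$subsys/allowed_hosts/"        # @38
key='DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7:'
ckey='DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=:'
echo 'hmac(sha256)' > "$host/dhchap_hash"     # @48
echo ffdhe2048      > "$host/dhchap_dhgroup"  # @49
echo "$key"         > "$host/dhchap_key"      # @50
[[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # @51: bidirectional
```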
00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.825 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 nvme0n1 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.086 07:41:49 
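[editor's note] connect_authenticate (auth.sh@55-65) is the initiator half of each round, done entirely over JSON-RPC: narrow the host's accepted digests and DH groups, attach with the matching keyring entries, verify the controller actually came up (which implies DH-HMAC-CHAP succeeded and the nvme0n1 namespace appeared), and detach for the next round. get_main_ns_ip reduces to NVMF_INITIATOR_IP, 10.0.0.1 here, for tcp. The keyid-0 round just traced, as a standalone sketch (rpc path assumed as in the earlier sketch):

```bash
# Sketch of one traced round: sha256 / ffdhe2048 / keyid 0.
rpc="$SPDK_DIR/scripts/rpc.py"
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # @64
"$rpc" bdev_nvme_detach_controller nvme0                               # @65
```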
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:31.086 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.087 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.348 nvme0n1 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.348 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.349 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.349 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.349 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.610 nvme0n1 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.610 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.611 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.873 nvme0n1 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.873 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.874 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.874 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.874 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.874 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.874 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.874 07:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.135 nvme0n1 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.135 07:41:50 
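[editor's note] Everything from here is that same round swept across the full matrix (auth.sh@100-104): keyid 4 under sha256/ffdhe2048 is finishing above, and the trace below moves on to ffdhe3072. The driving loops, as the @100/@101/@102 markers indicate:

```bash
# Sketch of the traced sweep; the digest and dhgroup lists are the ones
# printed at auth.sh@94.
for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do                          # 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # kernel side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # SPDK side
        done
    done
done
```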
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]] 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.135 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.396 nvme0n1 00:27:32.396 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.396 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.396 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.396 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.396 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.396 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.397 
07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.397 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.755 nvme0n1 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.755 07:41:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.755 nvme0n1 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.755 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.044 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.045 07:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.045 07:41:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.045 nvme0n1 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.045 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.306 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.307 07:41:51 
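The echo 'hmac(sha256)' / echo ffdhe3072 / echo DHHC-1:... lines (host/auth.sh@48-51) are the other half, nvmet_auth_set_key: before each reconnect the test re-programs the kernel nvmet target with the digest, DH group, host key, and optional controller key. The trace shows only the values being echoed, not their destination; a sketch of the likely writes, assuming the standard Linux nvmet configfs layout (the paths are an assumption, not taken from the trace), using the ffdhe3072 key-id-2 values seen above:

  # hostnqn matches the -q value used on the host side of the trace
  H=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$H/dhchap_hash"
  echo 'ffdhe3072'    > "$H/dhchap_dhgroup"
  echo 'DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv:' > "$H/dhchap_key"
  # written only when a ckey exists for this key id (bidirectional auth)
  echo 'DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD:' > "$H/dhchap_ctrlr_key"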
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.307 nvme0n1 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.307 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]] 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.569 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.831 nvme0n1 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:33.831 07:41:51 
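The DHHC-1 strings rotated through here follow the DH-HMAC-CHAP secret representation from the NVMe spec, DHHC-1:<t>:<base64>:, where the two-digit <t> field names the transformation applied to the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the secret followed by a CRC-32 check value. nvme-cli can generate keys in this format; the flags below are from its gen-dhchap-key command and worth verifying against the installed version:

  # generate a SHA-512-transformed key bound to the host NQN used in this test
  nvme gen-dhchap-key --hmac=3 --nqn=nqn.2024-02.io.spdk:host0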
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.831 07:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.093 nvme0n1 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.093 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
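One bash detail behind the one-way passes: the repeated host/auth.sh@58 line, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), uses ${var:+alt} expansion, so the --dhchap-ctrlr-key argument pair is appended only when a non-empty controller key exists for that key id; key id 4 has ckey='' (hence the [[ -z '' ]] branches above), leaving the array empty. Illustrative values, not the real test keys:

  ckeys=( [0]='DHHC-1:03:placeholder:' [4]='' )   # placeholder secrets for illustration
  keyid=0; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"   # -> --dhchap-ctrlr-key ckey0
  keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"   # -> (empty: no controller key, authentication is one-way)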
00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.094 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.355 nvme0n1 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.355 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.616 nvme0n1 00:27:34.616 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.616 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.616 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.616 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.616 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.616 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.878 07:41:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.878 07:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.139 nvme0n1 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7:
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=:
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7:
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]]
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=:
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.139 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.711 nvme0n1
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==:
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==:
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==:
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]]
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==:
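
Every secret echoed above follows the NVMe-oF DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where <t> names the transform applied to the retained key (00 = unhashed, 01/02/03 = SHA-256/384/512, which is why keyids 0 through 4 here carry different prefixes) and the base64 blob is the secret followed by a 4-byte CRC-32. A quick way to sanity-check one of the key0 secrets from this log (a sketch; assumes GNU coreutils base64):

  key='DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7:'
  b64=${key#DHHC-1:*:}   # strip the DHHC-1:00: prefix
  b64=${b64%:}           # and the trailing colon
  echo -n "$b64" | base64 -d | wc -c   # prints 36: 32-byte secret + 4-byte CRC-32
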
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.711 07:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.972 nvme0n1
00:27:35.972 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.972 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:35.972 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:35.972 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.972 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.972 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.972 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:35.972 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:35.972 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.972 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv:
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD:
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv:
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]]
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD:
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:36.233 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:36.234 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:36.234 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.495 nvme0n1
00:27:36.495 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:36.495 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:36.495 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:36.495 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:36.495 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.495 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==:
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw:
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==:
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]]
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw:
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:36.496 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:36.757 07:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:37.019 nvme0n1
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
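
On the target side, the echo lines traced inside nvmet_auth_set_key land in the in-kernel nvmet configfs entry for the allowed host: the HMAC name, the DH group, and the key(s). Roughly (a sketch, assuming the kernel target configured earlier in this run and the host NQN used throughout; attribute names per the Linux nvmet configfs interface):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest
  echo ffdhe6144      > "$host/dhchap_dhgroup"   # DH group
  echo "$key"         > "$host/dhchap_key"       # keys[keyid]
  [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # only for bidirectional auth
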
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=:
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=:
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:37.019 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:37.591 nvme0n1
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7:
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=:
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7:
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]]
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=:
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:27:37.591 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:37.592 07:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.163 nvme0n1
00:27:38.163 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.163 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:38.163 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:38.163 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.163 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.163 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==:
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==:
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==:
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]]
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==:
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
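
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line at auth.sh@58 is conditional argument building: when ckeys[keyid] is set and non-empty the array holds the two extra words, otherwise it is empty and the flag vanishes from the rpc_cmd call entirely, which is exactly what happened for keyid 4 above (ckey= was empty and only --dhchap-key key4 was passed). In isolation (a self-contained sketch with placeholder key names):

  keys=(k0 k1 k2 k3 k4)
  ckeys=(c0 c1 c2 c3 "")            # keyid 4 has no controller key
  for keyid in "${!keys[@]}"; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra args: ${#ckey[@]}"   # 2 for keyids 0-3, 0 for keyid 4
  done
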
"ckey${keyid}"}) 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.423 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.424 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.424 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.424 07:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.996 nvme0n1 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:38.996 
07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.996 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.567 nvme0n1 00:27:39.567 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.567 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.568 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.568 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.568 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:39.828 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.829 
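
The get_main_ns_ip helper traced repeatedly here (nvmf/common.sh@769-783) resolves which address the host should dial: it maps the transport to the name of the right environment variable and then dereferences it. A condensed sketch of the same logic, with values mirroring this run:

  TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1
  declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  ip=${ip_candidates[$TEST_TRANSPORT]}   # yields the name NVMF_INITIATOR_IP, not a value
  echo "${!ip}"                          # indirect expansion, prints 10.0.0.1 here
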
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:39.829 07:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.401 nvme0n1
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=:
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=:
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:40.401 07:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.345 nvme0n1
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7:
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=:
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7:
00:27:41.345 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]]
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=:
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
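
The for digest / for dhgroup / for keyid headers that keep reappearing (host/auth.sh@100-103) are the outer sweep: the sha256 block has just finished its last DH group, and the trace above shows sha384 starting over at ffdhe2048 with keyid 0. Condensed, the driver has roughly this shape (a sketch; the exact digest and dhgroup lists are assumptions inferred from what this log exercises, and keys/ckeys are the arrays populated earlier in the run):

  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # re-key the target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
      done
    done
  done
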
common/autotest_common.sh@10 -- # set +x 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.346 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.608 nvme0n1 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:41.608 07:41:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.608 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.609 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.609 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.871 nvme0n1 00:27:41.871 07:41:59 
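The rounds traced above begin with the target-side half of the test: host/auth.sh's nvmet_auth_set_key installs the key for the next keyid on the kernel nvmet target before the SPDK host tries to connect. The echoes at @48-@51 are redirected into nvmet configfs attributes; the redirect targets are not visible in the xtrace, so the paths below are assumed from the Linux nvmet configfs layout, and only the digest/dhgroup/key/ckey handling is taken from the trace. A sketch, not the script verbatim:

  # Sketch of nvmet_auth_set_key, reconstructed from the xtrace above.
  # keys[] / ckeys[] are the DHHC-1 key arrays defined earlier in host/auth.sh;
  # the configfs path is an assumption, not shown in the log.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. 'hmac(sha384)'
      echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe2048
      echo "$key"          > "$host/dhchap_key"
      # A controller key is only installed when this keyid tests
      # bidirectional authentication (keyid 4 has an empty ckey)
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
  }
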
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.871 07:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.871 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.871 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 nvme0n1 00:27:42.132 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.132 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.132 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.132 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.132 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.132 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.132 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.132 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.132 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.132 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.132 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.133 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.394 nvme0n1 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]] 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.395 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.657 nvme0n1 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.657 
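The host-side half follows immediately in each round: connect_authenticate narrows the SPDK bdev_nvme module to the one digest and DH group under test, attaches with the matching key pair, and treats the round as passed only if the controller registers as nvme0. Condensed from the rpc_cmd calls in the trace (rpc_cmd is the framework's wrapper around scripts/rpc.py; the ckey array expansion at @58 is inlined here for readability):

  # Condensed sketch of connect_authenticate as traced at host/auth.sh@55-@65
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3

      # Offer exactly one digest and one DH group for the DH-HMAC-CHAP handshake
      rpc_cmd bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Connect using the named keys registered for this keyid
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" \
          ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

      # Authentication succeeded iff the controller actually came up
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }
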
07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.657 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.658 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.658 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.658 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.658 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.658 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.658 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.658 07:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.658 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.658 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.658 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.658 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.658 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.658 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.920 nvme0n1 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.920 07:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.182 nvme0n1 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.182 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.444 nvme0n1 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.444 
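All of the secrets echoed in this phase use the NVMe-oF DH-HMAC-CHAP key string format, DHHC-1:NN:<base64>:, where NN names the hash tied to the secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret followed by a 4-byte CRC-32. That reading comes from the NVMe authentication spec and nvme-cli conventions rather than from this log, so take it as an assumption; a quick sanity check on one of the keys above:

  # Hypothetical check: the payload of a DHHC-1 key should decode to
  # secret + 4 CRC bytes, e.g. a 32-byte secret gives 36 decoded bytes.
  key='DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7:'
  b64=${key#DHHC-1:*:}; b64=${b64%:}
  echo -n "$b64" | base64 -d | wc -c   # prints 36
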
07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.444 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.706 nvme0n1 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.706 
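The repeated local ip / ip_candidates fragments that precede every attach are the expansion of get_main_ns_ip from nvmf/common.sh: it maps the transport under test to the name of the environment variable holding the address to dial, then resolves that variable with indirect expansion. Reconstructed from the traced lines at nvmf/common.sh@769-@783 (the variable names are exactly as traced; the control flow is inferred from the [[ -z ... ]] guards):

  # get_main_ns_ip as reconstructed from the trace; prints the address the
  # initiator should dial for the current transport (tcp -> 10.0.0.1 here)
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )

      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # the variable must be populated
      echo "${!ip}"
  }
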
07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]] 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.706 07:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.967 nvme0n1 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.967 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.968 07:42:02 
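Stepping back, the @101-@104 markers stitch all of these rounds together: an outer loop over the configured DH groups and an inner loop over the key indices, installing each key on the target and then authenticating from the host. In shape (the digest is pinned to sha384 for this phase; dhgroups and keys are the arrays the trace iterates):

  # Driver for this phase, as traced at host/auth.sh@101-@104 (shape inferred)
  for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048, ffdhe3072, ffdhe4096, ...
      for keyid in "${!keys[@]}"; do     # key indices 0..4
          nvmet_auth_set_key   sha384 "$dhgroup" "$keyid"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done
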
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.968 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.229 nvme0n1 00:27:44.229 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.229 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.229 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.229 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.229 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.229 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.229 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.229 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.229 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.229 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.491 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.752 nvme0n1 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.752 07:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.013 nvme0n1 00:27:45.013 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.013 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.013 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.013 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.013 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.014 07:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.014 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.275 nvme0n1 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
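
# --- editor's note ---------------------------------------------------------
# The ffdhe4096 sweep above repeats one pattern per keyid: before the host
# connects, nvmet_auth_set_key provisions the kernel nvmet target with the
# same DH-HMAC-CHAP secret, digest, and DH group the initiator will present
# (the echoes of 'hmac(sha384)', the dhgroup name, and the DHHC-1 keys in the
# trace). A minimal sketch of that step, assuming the standard Linux nvmet
# configfs layout -- the paths and attribute names are an assumption, not
# taken from this log; the host NQN and key placeholders mirror the trace:
HOST_DIR=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
echo 'hmac(sha384)'   > "$HOST_DIR/dhchap_hash"      # digest under test
echo 'ffdhe4096'      > "$HOST_DIR/dhchap_dhgroup"   # DH group under test
echo 'DHHC-1:00:...'  > "$HOST_DIR/dhchap_key"       # host secret for this keyid
echo 'DHHC-1:02:...'  > "$HOST_DIR/dhchap_ctrl_key"  # controller secret, only for
                                                     # bidirectional keyids
# ---------------------------------------------------------------------------
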
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]] 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.275 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.536 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.798 nvme0n1 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.798 07:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.369 nvme0n1 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.369 07:42:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:46.369 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.370 07:42:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.370 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.941 nvme0n1 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.941 07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.941 
07:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.201 nvme0n1 00:27:47.201 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.201 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.201 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.201 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.201 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.201 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.201 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.201 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.201 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.201 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
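
# --- editor's note ---------------------------------------------------------
# On the initiator side, each iteration then narrows SPDK's allowed digests
# and DH groups and attaches over TCP with the matching keyring entries, as
# the rpc_cmd lines above show for ffdhe6144. A condensed sketch of that RPC
# sequence ("rpc.py" stands in for the suite's rpc_cmd wrapper; key1/ckey1
# are keyring names the suite is assumed to have registered earlier, e.g.
# via keyring_file_add_key):
rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# The attach succeeds only if authentication completes; --dhchap-ctrlr-key is
# omitted for keyids with no controller secret (key4 in this trace).
# ---------------------------------------------------------------------------
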
common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.462 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.722 nvme0n1 00:27:47.722 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.723 07:42:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]] 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.723 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.982 07:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.552 nvme0n1 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.552 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.553 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.553 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.553 07:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.123 nvme0n1 00:27:49.123 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.123 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.123 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.123 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.123 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.385 
07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.385 07:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.957 nvme0n1 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.957 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.899 nvme0n1 00:27:50.899 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.899 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.899 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.900 07:42:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.900 07:42:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.900 07:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.473 nvme0n1 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]] 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.473 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:51.734 nvme0n1 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.734 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.735 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.735 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.735 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.995 nvme0n1 00:27:51.995 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.995 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.995 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.995 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.995 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.995 07:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:51.995 
07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:51.995 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.996 nvme0n1 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.996 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.257 
07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.257 nvme0n1 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.257 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.519 nvme0n1 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]] 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.519 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.781 nvme0n1 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.781 
07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.781 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.042 07:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.042 07:42:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.042 nvme0n1 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.042 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:53.303 07:42:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.303 nvme0n1 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.303 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.564 07:42:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.564 nvme0n1 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.564 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.565 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.826 
07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
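The entries above and below repeat one pattern per (dhgroup, keyid) pair. A minimal bash sketch of that loop, reconstructed from the host/auth.sh fragments quoted in this trace (a hedged paraphrase keyed to the @-line references in the log, not the verbatim script), looks like this:

    # Reconstructed from the trace: exercise every DH group with every key.
    for dhgroup in "${dhgroups[@]}"; do                      # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                       # host/auth.sh@102
            # Install the key (and controller key, if any) on the target side.
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # host/auth.sh@103
            # Constrain the initiator to the digest/dhgroup under test.
            rpc_cmd bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            # Connect; the ctrlr-key argument is only added when a controller
            # key exists, via the ${ckeys[keyid]:+...} expansion seen at @58.
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
                -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" \
                ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
            # Authentication passed iff the controller materialized; clean up.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0        # host/auth.sh@65
        done
    done

The DHHC-1:NN:...: strings echoed throughout follow the NVMe-oF DH-HMAC-CHAP secret format, in which the two-digit field after "DHHC-1:" identifies the hash used to transform the secret (00 for an untransformed key up through 03 for SHA-512), consistent with the mix of key/ckey values being set in this run.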
00:27:53.826 nvme0n1 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.826 07:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.826 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.826 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.826 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.826 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]] 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.087 07:42:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.087 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.088 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.088 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.088 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.088 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.088 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.088 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.088 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.088 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.088 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.350 nvme0n1 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.350 07:42:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.350 07:42:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.350 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.611 nvme0n1 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.611 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.872 nvme0n1 00:27:54.872 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.872 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.872 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.872 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.872 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.872 07:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.872 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.873 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.873 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.873 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.873 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.873 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.134 nvme0n1 00:27:55.134 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.134 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.134 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.134 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.134 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.134 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.394 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.655 nvme0n1 00:27:55.655 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]] 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.656 07:42:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.656 07:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.249 nvme0n1 00:27:56.249 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.249 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.249 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.249 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:56.250 07:42:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.250 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.510 nvme0n1 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.510 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.771 07:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.030 nvme0n1 00:27:57.030 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.030 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.030 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.030 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.030 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.030 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.030 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.030 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.030 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.030 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.030 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.030 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.030 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.031 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.601 nvme0n1 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.601 07:42:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.601 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.602 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.602 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.602 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.602 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.602 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.602 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.602 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.602 07:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.173 nvme0n1 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWM4OTUzMWUwMWExOGU4NTUzMmY5NDc0NzRmMGI2ZjAuUiq7: 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: ]] 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDhjMTMyNmI5MWY2NGUyYzQ1YzE1YjlkMDhkNjQzNWY3M2IyOGQ1OGNmY2I0NDFiOGQ4YTFjZDQ5MTI5YjI5YmXXuLk=: 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.173 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.744 nvme0n1 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.744 07:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.686 nvme0n1 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.686 07:42:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.686 07:42:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.686 07:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.258 nvme0n1 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjMwOTcyMGVhZjFiODNjZjFhYTRjZGNmMmM5M2ZjM2Y1NWNmMDE4MTU2ZTk4ZDlj4FYXww==: 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: ]] 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjFiYzMzMzNmM2RmZjlhZWU1MGU0NWUzMmVhYTU4NGV/6WMw: 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.258 07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.258 
07:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.830 nvme0n1 00:28:00.830 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.830 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.830 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.830 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.830 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWUzMWE4YzE3ODFiYTc5YjUzYzdkOTJkNmI3NWEzMjZjZGU0ZjkxZjk1MmEyMTQzY2Q2MDFlYmYwM2ZmOWM2Yb4gEKU=: 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.091 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.662 nvme0n1 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:01.662 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:01.663 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:01.663 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.663 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd
00:28:01.663 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:01.663 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:01.663 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:01.663 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:01.924 request:
00:28:01.925 {
00:28:01.925 "name": "nvme0",
00:28:01.925 "trtype": "tcp",
00:28:01.925 "traddr": "10.0.0.1",
00:28:01.925 "adrfam": "ipv4",
00:28:01.925 "trsvcid": "4420",
00:28:01.925 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:01.925 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:01.925 "prchk_reftag": false,
00:28:01.925 "prchk_guard": false,
00:28:01.925 "hdgst": false,
00:28:01.925 "ddgst": false,
00:28:01.925 "allow_unrecognized_csi": false,
00:28:01.925 "method": "bdev_nvme_attach_controller",
00:28:01.925 "req_id": 1
00:28:01.925 }
00:28:01.925 Got JSON-RPC error response
00:28:01.925 response:
00:28:01.925 {
00:28:01.925 "code": -5,
00:28:01.925 "message": "Input/output error"
00:28:01.925 }
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
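
The request/response pair above is the suite's first negative check: the kernel nvmet target was configured to require DH-HMAC-CHAP, so an attach attempt without --dhchap-key is rejected and rpc_cmd surfaces JSON-RPC error -5 (Input/output error), which the NOT wrapper then turns into a pass. A minimal sketch of the same check run by hand, assuming a root shell in an SPDK tree with the target, keyring, and NQNs set up as earlier in this log:

    # Expected to fail: the target demands DH-HMAC-CHAP, but no key is offered.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
           -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
           -n nqn.2024-02.io.spdk:cnode0; then
        echo "FAIL: unauthenticated attach unexpectedly succeeded" >&2
        exit 1
    fi
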
00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.925 07:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.925 request: 00:28:01.925 { 00:28:01.925 "name": "nvme0", 00:28:01.925 "trtype": "tcp", 00:28:01.925 "traddr": "10.0.0.1", 00:28:01.925 "adrfam": "ipv4", 00:28:01.925 "trsvcid": "4420", 00:28:01.925 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:01.925 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:01.925 "prchk_reftag": false, 00:28:01.925 "prchk_guard": false, 00:28:01.925 "hdgst": false, 00:28:01.925 "ddgst": false, 00:28:01.925 "dhchap_key": "key2", 00:28:01.925 "allow_unrecognized_csi": false, 00:28:01.925 "method": "bdev_nvme_attach_controller", 00:28:01.925 "req_id": 1 00:28:01.925 } 00:28:01.925 Got JSON-RPC error response 00:28:01.925 response: 00:28:01.925 { 00:28:01.925 "code": -5, 00:28:01.925 "message": "Input/output error" 00:28:01.925 } 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
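
Around each of these expected failures, the es bookkeeping in the trace (es=1, (( es > 128 )), (( !es == 0 ))) is autotest_common.sh's NOT wrapper inverting the wrapped command's exit status. A simplified sketch of that helper (the real one also validates the argument via valid_exec_arg, as the trace shows), under the assumption that any exit status 1-128 counts as the expected error while signal deaths stay failures:

    NOT() {
        # Run the wrapped command and capture its exit status.
        local es=0
        "$@" || es=$?
        # Propagate signal deaths (status > 128) as genuine failures.
        (( es > 128 )) && return "$es"
        # Succeed only if the command failed, i.e. the failure was expected.
        (( es != 0 ))
    }

    # Usage, mirroring the trace: attaching with the wrong key must be rejected.
    NOT scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
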
00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.925 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.187 request: 00:28:02.187 { 00:28:02.187 "name": "nvme0", 00:28:02.187 "trtype": "tcp", 00:28:02.187 "traddr": "10.0.0.1", 00:28:02.187 "adrfam": "ipv4", 00:28:02.187 "trsvcid": "4420", 00:28:02.187 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:02.187 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:02.187 "prchk_reftag": false, 00:28:02.187 "prchk_guard": false, 00:28:02.187 "hdgst": false, 00:28:02.187 "ddgst": false, 00:28:02.187 "dhchap_key": "key1", 00:28:02.187 "dhchap_ctrlr_key": "ckey2", 00:28:02.187 "allow_unrecognized_csi": false, 00:28:02.187 "method": "bdev_nvme_attach_controller", 00:28:02.187 "req_id": 1 00:28:02.187 } 00:28:02.187 Got JSON-RPC error response 00:28:02.187 response: 00:28:02.187 { 00:28:02.187 "code": -5, 00:28:02.187 "message": "Input/output 
error" 00:28:02.187 } 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.187 nvme0n1 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.187 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.448 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.448 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:02.448 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:02.448 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:02.448 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.449 request: 00:28:02.449 { 00:28:02.449 "name": "nvme0", 00:28:02.449 "dhchap_key": "key1", 00:28:02.449 "dhchap_ctrlr_key": "ckey2", 00:28:02.449 "method": "bdev_nvme_set_keys", 00:28:02.449 "req_id": 1 00:28:02.449 } 00:28:02.449 Got JSON-RPC error response 00:28:02.449 response: 00:28:02.449 { 00:28:02.449 "code": -13, 00:28:02.449 "message": "Permission denied" 00:28:02.449 } 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:02.449 07:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:03.392 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.392 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:03.392 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.392 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.392 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.392 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:03.392 07:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0OTRjYTFkODljNTUyZmVhODk2NjQ4YjQ5Y2ViNTc4YzNlMjI0YzEzMDQ2YTM2BzyHYw==: 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: ]] 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MWU5MGM2ZWRkMTg2ZjQwMDk3ZjEwNmMwMzNkMTg0YWFiZjJiOWY5MzQ3MTIxMTMzKpaPPQ==: 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.780 nvme0n1 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTEwMWVjNzI0NzZkNmYyMDUzYjRjMTk4MzI3Mjg2ODYMqQHv: 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: ]] 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDczNmQ1ZGJkODk1ZDllMzIxNWFlNGU2NWQ4OWI1ZjNqfwtD: 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.780 request: 00:28:04.780 { 00:28:04.780 "name": "nvme0", 00:28:04.780 "dhchap_key": "key2", 00:28:04.780 "dhchap_ctrlr_key": "ckey1", 00:28:04.780 "method": "bdev_nvme_set_keys", 00:28:04.780 "req_id": 1 00:28:04.780 } 00:28:04.780 Got JSON-RPC error response 00:28:04.780 response: 00:28:04.780 { 00:28:04.780 "code": -13, 00:28:04.780 "message": "Permission denied" 00:28:04.780 } 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:04.780 07:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:05.724 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.724 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:05.724 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.724 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.724 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.986 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:05.986 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:05.986 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:05.986 07:42:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:05.986 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:05.986 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:05.986 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.986 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:05.986 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.986 07:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.986 rmmod nvme_tcp 00:28:05.986 rmmod nvme_fabrics 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3553624 ']' 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3553624 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 3553624 ']' 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 3553624 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3553624 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3553624' 00:28:05.986 killing process with pid 3553624 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 3553624 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 3553624 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:05.986 07:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.535 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:08.535 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:08.535 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:08.535 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:08.535 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:08.535 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:08.535 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:08.535 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:08.535 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:08.535 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:08.535 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:08.535 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:08.535 07:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:11.852 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:11.852 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:11.852 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:11.852 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:11.852 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:11.852 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:11.852 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:11.852 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:11.852 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:11.852 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:11.852 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:11.852 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:11.852 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:12.113 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:12.113 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:12.113 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:12.113 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:12.374 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tZa /tmp/spdk.key-null.IRW /tmp/spdk.key-sha256.kRZ /tmp/spdk.key-sha384.R5G /tmp/spdk.key-sha512.0Wp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:12.374 07:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:16.581 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:28:16.581 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:16.581 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:16.581 00:28:16.581 real 1m1.248s 00:28:16.581 user 0m54.822s 00:28:16.581 sys 0m16.333s 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.581 ************************************ 00:28:16.581 END TEST nvmf_auth_host 00:28:16.581 ************************************ 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.581 ************************************ 00:28:16.581 START TEST nvmf_digest 00:28:16.581 ************************************ 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:16.581 * Looking for test storage... 
00:28:16.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:16.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.581 --rc genhtml_branch_coverage=1 00:28:16.581 --rc genhtml_function_coverage=1 00:28:16.581 --rc genhtml_legend=1 00:28:16.581 --rc geninfo_all_blocks=1 00:28:16.581 --rc geninfo_unexecuted_blocks=1 00:28:16.581 00:28:16.581 ' 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:16.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.581 --rc genhtml_branch_coverage=1 00:28:16.581 --rc genhtml_function_coverage=1 00:28:16.581 --rc genhtml_legend=1 00:28:16.581 --rc geninfo_all_blocks=1 00:28:16.581 --rc geninfo_unexecuted_blocks=1 00:28:16.581 00:28:16.581 ' 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:16.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.581 --rc genhtml_branch_coverage=1 00:28:16.581 --rc genhtml_function_coverage=1 00:28:16.581 --rc genhtml_legend=1 00:28:16.581 --rc geninfo_all_blocks=1 00:28:16.581 --rc geninfo_unexecuted_blocks=1 00:28:16.581 00:28:16.581 ' 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:16.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.581 --rc genhtml_branch_coverage=1 00:28:16.581 --rc genhtml_function_coverage=1 00:28:16.581 --rc genhtml_legend=1 00:28:16.581 --rc geninfo_all_blocks=1 00:28:16.581 --rc geninfo_unexecuted_blocks=1 00:28:16.581 00:28:16.581 ' 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.581 
07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.581 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:16.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:16.582 07:42:34 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:16.582 07:42:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.722 
07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:24.722 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:24.723 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:24.723 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:24.723 Found net devices under 0000:31:00.0: cvl_0_0 
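[editor's note] The loop traced above resolves each supported NVMe-oF-capable PCI function to its kernel net device by globbing /sys/bus/pci/devices/<bdf>/net/; the second port (0000:31:00.1) is resolved the same way immediately below. A minimal standalone sketch of that sysfs lookup, using the device address 0000:31:00.0 taken from this log (any bound NIC works):

    # Map one PCI function to its registered net interface(s) via sysfs.
    shopt -s nullglob                                  # empty array if nothing is bound
    pci=0000:31:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # absolute sysfs paths
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip to names, e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
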
00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:24.723 Found net devices under 0000:31:00.1: cvl_0_1 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.723 07:42:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:24.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:28:24.723 00:28:24.723 --- 10.0.0.2 ping statistics --- 00:28:24.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.723 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:24.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:28:24.723 00:28:24.723 --- 10.0.0.1 ping statistics --- 00:28:24.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.723 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:24.723 ************************************ 00:28:24.723 START TEST nvmf_digest_clean 00:28:24.723 ************************************ 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.723 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3571377 00:28:24.724 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3571377 00:28:24.724 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3571377 ']' 00:28:24.724 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:24.724 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.724 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:24.724 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.724 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:24.724 07:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.724 [2024-11-20 07:42:42.435346] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:28:24.724 [2024-11-20 07:42:42.435405] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.724 [2024-11-20 07:42:42.535926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.724 [2024-11-20 07:42:42.586738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.724 [2024-11-20 07:42:42.586795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.724 [2024-11-20 07:42:42.586805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.724 [2024-11-20 07:42:42.586812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.724 [2024-11-20 07:42:42.586818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
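[editor's note] nvmfappstart has just launched the target paused (--wait-for-rpc) inside the cvl_0_0_ns_spdk namespace and is polling for its JSON-RPC socket before any configuration is sent. A reduced sketch of that launch-and-wait pattern, assuming the SPDK repo root as the working directory and the default /var/tmp/spdk.sock (the job's waitforlisten helper does roughly this):

    # Start nvmf_tgt paused in the target namespace, then block until the
    # JSON-RPC socket answers; rpc_get_methods is a cheap liveness probe.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is up on /var/tmp/spdk.sock"
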
00:28:24.724 [2024-11-20 07:42:42.587617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:25.295 null0 00:28:25.295 [2024-11-20 07:42:43.387755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.295 [2024-11-20 07:42:43.412042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3571473 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3571473 /var/tmp/bperf.sock 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3571473 ']' 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:25.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:25.295 07:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:25.295 [2024-11-20 07:42:43.478974] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:28:25.295 [2024-11-20 07:42:43.479059] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3571473 ] 00:28:25.556 [2024-11-20 07:42:43.573351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.556 [2024-11-20 07:42:43.626341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.128 07:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:26.128 07:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:26.128 07:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:26.128 07:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:26.128 07:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:26.388 07:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.388 07:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.960 nvme0n1 00:28:26.960 07:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:26.960 07:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:26.960 Running I/O for 2 seconds... 
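[editor's note] The clean-digest case attaches the bperf controller with --ddgst only (NVMe/TCP data digest, no header digest) and then kicks the preconfigured job through bdevperf's RPC helper; the 2-second randread results follow. The two calls, condensed from the trace above (socket path and NQN exactly as in this job):

    # Attach with TCP data digest enabled, then drive the queued bdevperf job.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
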
00:28:28.842 18837.00 IOPS, 73.58 MiB/s [2024-11-20T06:42:47.052Z] 20471.50 IOPS, 79.97 MiB/s 00:28:28.842 Latency(us) 00:28:28.842 [2024-11-20T06:42:47.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.842 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:28.842 nvme0n1 : 2.00 20498.87 80.07 0.00 0.00 6238.30 2252.80 22391.47 00:28:28.842 [2024-11-20T06:42:47.052Z] =================================================================================================================== 00:28:28.842 [2024-11-20T06:42:47.052Z] Total : 20498.87 80.07 0.00 0.00 6238.30 2252.80 22391.47 00:28:28.842 { 00:28:28.842 "results": [ 00:28:28.842 { 00:28:28.842 "job": "nvme0n1", 00:28:28.842 "core_mask": "0x2", 00:28:28.842 "workload": "randread", 00:28:28.842 "status": "finished", 00:28:28.842 "queue_depth": 128, 00:28:28.842 "io_size": 4096, 00:28:28.842 "runtime": 2.003574, 00:28:28.842 "iops": 20498.868521951274, 00:28:28.842 "mibps": 80.07370516387216, 00:28:28.842 "io_failed": 0, 00:28:28.842 "io_timeout": 0, 00:28:28.842 "avg_latency_us": 6238.3031089251945, 00:28:28.842 "min_latency_us": 2252.8, 00:28:28.842 "max_latency_us": 22391.466666666667 00:28:28.842 } 00:28:28.842 ], 00:28:28.842 "core_count": 1 00:28:28.842 } 00:28:28.842 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:28.842 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:28.842 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:28.842 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:28.842 | select(.opcode=="crc32c") 00:28:28.842 | "\(.module_name) \(.executed)"' 00:28:28.842 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3571473 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3571473 ']' 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3571473 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3571473 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 
= sudo ']' 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3571473' 00:28:29.104 killing process with pid 3571473 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3571473 00:28:29.104 Received shutdown signal, test time was about 2.000000 seconds 00:28:29.104 00:28:29.104 Latency(us) 00:28:29.104 [2024-11-20T06:42:47.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.104 [2024-11-20T06:42:47.314Z] =================================================================================================================== 00:28:29.104 [2024-11-20T06:42:47.314Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:29.104 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3571473 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3572231 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3572231 /var/tmp/bperf.sock 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3572231 ']' 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:29.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:29.365 07:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:29.365 [2024-11-20 07:42:47.430993] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:28:29.365 [2024-11-20 07:42:47.431049] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3572231 ] 00:28:29.365 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:29.365 Zero copy mechanism will not be used. 00:28:29.365 [2024-11-20 07:42:47.515340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.365 [2024-11-20 07:42:47.544917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.305 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:30.305 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:30.305 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:30.305 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:30.305 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:30.305 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.305 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.562 nvme0n1 00:28:30.562 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:30.562 07:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:30.820 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:30.820 Zero copy mechanism will not be used. 00:28:30.820 Running I/O for 2 seconds... 
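[editor's note] This pass repeats the measurement at 128 KiB blocks and queue depth 16; bdevperf warns that the 131072-byte I/O size exceeds the 65536-byte zero-copy threshold, so the posix sock layer falls back to copying sends. The reported throughput is just IOPS times block size, which can be sanity-checked against the table that follows:

    # Verify MiB/s from the reported IOPS and the 128 KiB block size.
    echo "scale=2; 3922.34 * 131072 / 1048576" | bc   # -> 490.29, matching the table
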
00:28:32.698 4019.00 IOPS, 502.38 MiB/s
[2024-11-20T06:42:50.908Z] 3918.00 IOPS, 489.75 MiB/s
00:28:32.698 Latency(us)
00:28:32.698 [2024-11-20T06:42:50.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:32.698 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:32.698 nvme0n1 : 2.00 3922.34 490.29 0.00 0.00 4076.43 535.89 8956.59
00:28:32.698 [2024-11-20T06:42:50.908Z] ===================================================================================================================
00:28:32.698 [2024-11-20T06:42:50.908Z] Total : 3922.34 490.29 0.00 0.00 4076.43 535.89 8956.59
00:28:32.698 {
00:28:32.698 "results": [
00:28:32.698 {
00:28:32.698 "job": "nvme0n1",
00:28:32.698 "core_mask": "0x2",
00:28:32.698 "workload": "randread",
00:28:32.698 "status": "finished",
00:28:32.698 "queue_depth": 16,
00:28:32.698 "io_size": 131072,
00:28:32.698 "runtime": 2.001864,
00:28:32.698 "iops": 3922.3443750424603,
00:28:32.698 "mibps": 490.29304688030754,
00:28:32.698 "io_failed": 0,
00:28:32.698 "io_timeout": 0,
00:28:32.698 "avg_latency_us": 4076.434627271183,
00:28:32.698 "min_latency_us": 535.8933333333333,
00:28:32.698 "max_latency_us": 8956.586666666666
00:28:32.698 }
00:28:32.698 ],
00:28:32.699 "core_count": 1
00:28:32.699 }
00:28:32.699 07:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:32.699 07:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:32.699 07:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:32.699 07:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:32.699 | select(.opcode=="crc32c")
00:28:32.699 | "\(.module_name) \(.executed)"'
00:28:32.699 07:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:32.959 07:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:32.959 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:32.959 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:32.959 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:32.959 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3572231
00:28:32.959 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3572231 ']'
00:28:32.959 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3572231
00:28:32.959 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:28:32.959 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:32.959 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3572231
00:28:32.959 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:32.959 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:32.959 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3572231'
killing process with pid 3572231
07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3572231
00:28:32.959 Received shutdown signal, test time was about 2.000000 seconds
00:28:32.959
00:28:32.959 Latency(us)
[2024-11-20T06:42:51.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-20T06:42:51.169Z] ===================================================================================================================
[2024-11-20T06:42:51.169Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:32.959 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3572231
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3573034
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3573034 /var/tmp/bperf.sock
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3573034 ']'
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:33.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:33.220 07:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:33.220 [2024-11-20 07:42:51.226448] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
00:28:33.220 [2024-11-20 07:42:51.226503] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3573034 ] 00:28:33.220 [2024-11-20 07:42:51.309275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.220 [2024-11-20 07:42:51.338673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.160 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:34.160 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:34.160 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:34.160 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:34.160 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:34.160 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.160 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.420 nvme0n1 00:28:34.420 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:34.420 07:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:34.420 Running I/O for 2 seconds... 
00:28:36.498 29552.00 IOPS, 115.44 MiB/s
[2024-11-20T06:42:54.708Z] 29324.00 IOPS, 114.55 MiB/s
00:28:36.498 Latency(us)
00:28:36.498 [2024-11-20T06:42:54.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:36.498 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:36.498 nvme0n1 : 2.01 29323.77 114.55 0.00 0.00 4357.61 1713.49 14964.05
00:28:36.498 [2024-11-20T06:42:54.708Z] ===================================================================================================================
00:28:36.498 [2024-11-20T06:42:54.708Z] Total : 29323.77 114.55 0.00 0.00 4357.61 1713.49 14964.05
00:28:36.498 {
00:28:36.498 "results": [
00:28:36.498 {
00:28:36.498 "job": "nvme0n1",
00:28:36.498 "core_mask": "0x2",
00:28:36.498 "workload": "randwrite",
00:28:36.498 "status": "finished",
00:28:36.498 "queue_depth": 128,
00:28:36.498 "io_size": 4096,
00:28:36.498 "runtime": 2.005472,
00:28:36.498 "iops": 29323.770164829028,
00:28:36.498 "mibps": 114.54597720636339,
00:28:36.498 "io_failed": 0,
00:28:36.498 "io_timeout": 0,
00:28:36.498 "avg_latency_us": 4357.607204461978,
00:28:36.498 "min_latency_us": 1713.4933333333333,
00:28:36.498 "max_latency_us": 14964.053333333333
00:28:36.498 }
00:28:36.498 ],
00:28:36.498 "core_count": 1
00:28:36.498 }
00:28:36.498 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:36.498 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:36.499 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:36.499 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:36.499 | select(.opcode=="crc32c")
00:28:36.499 | "\(.module_name) \(.executed)"'
00:28:36.499 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3573034
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3573034 ']'
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3573034
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3573034
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3573034'
killing process with pid 3573034
07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3573034
00:28:36.791 Received shutdown signal, test time was about 2.000000 seconds
00:28:36.791
00:28:36.791 Latency(us)
[2024-11-20T06:42:55.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-20T06:42:55.001Z] ===================================================================================================================
[2024-11-20T06:42:55.001Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:36.791 07:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3573034
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3573845
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3573845 /var/tmp/bperf.sock
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3573845 ']'
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:37.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:37.052 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:37.052 [2024-11-20 07:42:55.070067] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
00:28:37.052 [2024-11-20 07:42:55.070125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3573845 ] 00:28:37.052 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:37.052 Zero copy mechanism will not be used. 00:28:37.052 [2024-11-20 07:42:55.155356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.052 [2024-11-20 07:42:55.184874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.994 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:37.994 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:37.994 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:37.994 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:37.994 07:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:37.994 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.994 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:38.566 nvme0n1 00:28:38.566 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:38.566 07:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:38.566 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:38.566 Zero copy mechanism will not be used. 00:28:38.566 Running I/O for 2 seconds... 
00:28:40.452 5952.00 IOPS, 744.00 MiB/s
[2024-11-20T06:42:58.662Z] 5549.50 IOPS, 693.69 MiB/s
00:28:40.452 Latency(us)
00:28:40.452 [2024-11-20T06:42:58.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:40.452 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:40.452 nvme0n1 : 2.01 5542.25 692.78 0.00 0.00 2881.02 1174.19 8574.29
00:28:40.452 [2024-11-20T06:42:58.662Z] ===================================================================================================================
00:28:40.452 [2024-11-20T06:42:58.662Z] Total : 5542.25 692.78 0.00 0.00 2881.02 1174.19 8574.29
00:28:40.452 {
00:28:40.452 "results": [
00:28:40.452 {
00:28:40.452 "job": "nvme0n1",
00:28:40.452 "core_mask": "0x2",
00:28:40.452 "workload": "randwrite",
00:28:40.452 "status": "finished",
00:28:40.452 "queue_depth": 16,
00:28:40.453 "io_size": 131072,
00:28:40.453 "runtime": 2.006224,
00:28:40.453 "iops": 5542.252510188294,
00:28:40.453 "mibps": 692.7815637735367,
00:28:40.453 "io_failed": 0,
00:28:40.453 "io_timeout": 0,
00:28:40.453 "avg_latency_us": 2881.020945528675,
00:28:40.453 "min_latency_us": 1174.1866666666667,
00:28:40.453 "max_latency_us": 8574.293333333333
00:28:40.453 }
00:28:40.453 ],
00:28:40.453 "core_count": 1
00:28:40.453 }
00:28:40.453 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:40.453 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:40.453 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:40.453 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:40.453 | select(.opcode=="crc32c")
00:28:40.453 | "\(.module_name) \(.executed)"'
00:28:40.453 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:40.713 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:40.713 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:40.713 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:40.713 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:40.713 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3573845
00:28:40.713 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3573845 ']'
00:28:40.713 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3573845
00:28:40.713 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:28:40.713 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:40.713 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3573845
00:28:40.713 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:40.713 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:40.713 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3573845'
killing process with pid 3573845
07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3573845
00:28:40.713 Received shutdown signal, test time was about 2.000000 seconds
00:28:40.713
00:28:40.713 Latency(us)
[2024-11-20T06:42:58.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-20T06:42:58.924Z] ===================================================================================================================
00:28:40.714 [2024-11-20T06:42:58.924Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:40.714 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3573845
00:28:40.974 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3571377
00:28:40.974 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3571377 ']'
00:28:40.974 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3571377
00:28:40.974 07:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:28:40.974 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:40.974 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3571377
00:28:40.974 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:28:40.974 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:28:40.974 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3571377'
killing process with pid 3571377
07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3571377
00:28:40.974 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3571377
00:28:40.974
00:28:40.974 real 0m16.803s
00:28:40.974 user 0m33.296s
00:28:40.974 sys 0m3.716s
00:28:40.974 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable
00:28:40.974 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:40.974 ************************************
00:28:40.974 END TEST nvmf_digest_clean
00:28:40.974 ************************************
00:28:41.235 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:28:41.235 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:28:41.235 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable
00:28:41.235 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:41.235 ************************************
00:28:41.235 START TEST nvmf_digest_error
00:28:41.235 ************************************
00:28:41.235 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- #
run_digest_error 00:28:41.235 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:41.235 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:41.235 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:41.235 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.235 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3574557 00:28:41.235 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3574557 00:28:41.236 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:41.236 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3574557 ']' 00:28:41.236 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.236 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:41.236 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.236 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:41.236 07:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.236 [2024-11-20 07:42:59.308986] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:28:41.236 [2024-11-20 07:42:59.309034] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.236 [2024-11-20 07:42:59.404293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.236 [2024-11-20 07:42:59.434370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.236 [2024-11-20 07:42:59.434398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.236 [2024-11-20 07:42:59.434404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.236 [2024-11-20 07:42:59.434409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.236 [2024-11-20 07:42:59.434413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:41.236 [2024-11-20 07:42:59.434909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.179 [2024-11-20 07:43:00.140849] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.179 null0 00:28:42.179 [2024-11-20 07:43:00.219847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.179 [2024-11-20 07:43:00.244038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3574881 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3574881 /var/tmp/bperf.sock 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3574881 ']' 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:42.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:42.179 07:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.179 [2024-11-20 07:43:00.300806] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:28:42.179 [2024-11-20 07:43:00.300853] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3574881 ] 00:28:42.439 [2024-11-20 07:43:00.385245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.439 [2024-11-20 07:43:00.415117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.010 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:43.010 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:43.010 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:43.010 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:43.271 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:43.271 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.271 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.271 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.271 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.271 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.532 nvme0n1 00:28:43.532 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:43.532 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.532 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:28:43.532 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.532 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:43.532 07:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:43.532 Running I/O for 2 seconds... 00:28:43.532 [2024-11-20 07:43:01.698671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.532 [2024-11-20 07:43:01.698704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.532 [2024-11-20 07:43:01.698713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.532 [2024-11-20 07:43:01.708109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.532 [2024-11-20 07:43:01.708130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.532 [2024-11-20 07:43:01.708137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.532 [2024-11-20 07:43:01.718781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.532 [2024-11-20 07:43:01.718801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.532 [2024-11-20 07:43:01.718809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.532 [2024-11-20 07:43:01.727221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.532 [2024-11-20 07:43:01.727240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.532 [2024-11-20 07:43:01.727247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.532 [2024-11-20 07:43:01.736428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.532 [2024-11-20 07:43:01.736446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.532 [2024-11-20 07:43:01.736452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.793 [2024-11-20 07:43:01.745500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.793 [2024-11-20 07:43:01.745517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.793 [2024-11-20 07:43:01.745524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.793 [2024-11-20 07:43:01.755064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.793 [2024-11-20 07:43:01.755081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.793 [2024-11-20 07:43:01.755088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.793 [2024-11-20 07:43:01.763732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.763752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.763759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.772311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.772329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.772337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.781766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.781783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.781790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.790684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.790702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.790708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.799103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.799121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.799127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.807918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.807935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.807942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.817708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.817726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.817733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.825883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.825901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.825908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.834982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.835000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.835010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.844638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.844655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.844661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.852516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.852532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.852539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.861196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.861213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.861220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.870690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.870707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.870714] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.879030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.879048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.879054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.888668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.888685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.888692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.899432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.899450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.899457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.908622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.908640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.908647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.916912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.916931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.916939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.926178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.926196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.926203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.934925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.934943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19604 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.934950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.943855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.943873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.943879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.953829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.953847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.953854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.964212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.964230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.964236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.973837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.973855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.973861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.983229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.983246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.983252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.794 [2024-11-20 07:43:01.991483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:43.794 [2024-11-20 07:43:01.991501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.794 [2024-11-20 07:43:01.991510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.056 [2024-11-20 07:43:02.001181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:44.056 [2024-11-20 07:43:02.001199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:64 nsid:1 lba:851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.056 [2024-11-20 07:43:02.001205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.056 [2024-11-20 07:43:02.010511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:44.056 [2024-11-20 07:43:02.010529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.056 [2024-11-20 07:43:02.010535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.056 [2024-11-20 07:43:02.018181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:44.056 [2024-11-20 07:43:02.018198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.056 [2024-11-20 07:43:02.018204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.056 [2024-11-20 07:43:02.027529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:44.056 [2024-11-20 07:43:02.027547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.056 [2024-11-20 07:43:02.027553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.056 [2024-11-20 07:43:02.035726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:44.056 [2024-11-20 07:43:02.035743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.056 [2024-11-20 07:43:02.035755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.057 [2024-11-20 07:43:02.045419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:44.057 [2024-11-20 07:43:02.045437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.057 [2024-11-20 07:43:02.045443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.057 [2024-11-20 07:43:02.054548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:44.057 [2024-11-20 07:43:02.054566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.057 [2024-11-20 07:43:02.054573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.057 [2024-11-20 07:43:02.062447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:44.057 [2024-11-20 07:43:02.062465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.062472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.072687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.072708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.072715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.082282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.082299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.082305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.089960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.089977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.089984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.099269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.099287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.099293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.109646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.109663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.109669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.118361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.118378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.118384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.126915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.126932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.126939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.136293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.136310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.136317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.145242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.145259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.145265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.153156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.153173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.153180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.161940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.161957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.161963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.171113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.171131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.171137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.180273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.180290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.180296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.188291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.188309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.188315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.197920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.197937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.197944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.207092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.207109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.207116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.216038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.216054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.216061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.224371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.224388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.224397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.233398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.233415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.233422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.242196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.242214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.242221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.252364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.252381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.252387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.057 [2024-11-20 07:43:02.260942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.057 [2024-11-20 07:43:02.260959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.057 [2024-11-20 07:43:02.260965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.319 [2024-11-20 07:43:02.270669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.319 [2024-11-20 07:43:02.270686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.319 [2024-11-20 07:43:02.270692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.319 [2024-11-20 07:43:02.278709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.319 [2024-11-20 07:43:02.278726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.319 [2024-11-20 07:43:02.278733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.319 [2024-11-20 07:43:02.288616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.319 [2024-11-20 07:43:02.288633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.319 [2024-11-20 07:43:02.288639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.319 [2024-11-20 07:43:02.296797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.319 [2024-11-20 07:43:02.296814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.319 [2024-11-20 07:43:02.296820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.319 [2024-11-20 07:43:02.306949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.319 [2024-11-20 07:43:02.306969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.319 [2024-11-20 07:43:02.306976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.319 [2024-11-20 07:43:02.315443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.319 [2024-11-20 07:43:02.315460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.319 [2024-11-20 07:43:02.315466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.319 [2024-11-20 07:43:02.325790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.319 [2024-11-20 07:43:02.325807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.319 [2024-11-20 07:43:02.325813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.319 [2024-11-20 07:43:02.336125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.319 [2024-11-20 07:43:02.336141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.319 [2024-11-20 07:43:02.336148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.319 [2024-11-20 07:43:02.345062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.319 [2024-11-20 07:43:02.345079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.345085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.352414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.352430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.352437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.362353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.362371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.362377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.372051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.372068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.372074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.380424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.380440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.380450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.389874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.389891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.389897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.399530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.399546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.399553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.407872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.407889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.407895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.416545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.416562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.416568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.425416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.425434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.425440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.434173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.434190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.434196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.442891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.442908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.442914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.452157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.452174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.452180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.461540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.461560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.461567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.469904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.469921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.469928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.477887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.477904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.477911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.487804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.487821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.487828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.496884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.496900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.496907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.506236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.506252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.506259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.514593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.514610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.514616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.320 [2024-11-20 07:43:02.523558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.320 [2024-11-20 07:43:02.523575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.320 [2024-11-20 07:43:02.523582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.533146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.533163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.533169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.541168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.541185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.541191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.551217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.551234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.551240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.560561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.560578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.560584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.569445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.569462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.569468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.578648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.578665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.578672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.587787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.587804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.587810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.596877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.596894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.596900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.605624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.605641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.605647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.614492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.614509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.614518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.623646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.623663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.623670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.633333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.633350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.633356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.641858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.641875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.641881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.650328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.650345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.650351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.659656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.659673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.659680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.669126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.669143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.669149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.677594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.677611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.677617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 28005.00 IOPS, 109.39 MiB/s [2024-11-20T06:43:02.792Z] [2024-11-20 07:43:02.689374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.689391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.689397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.697492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.697510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.697516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.708853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.708871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.708877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.718153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.718170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.718176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.727151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.582 [2024-11-20 07:43:02.727168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.582 [2024-11-20 07:43:02.727174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.582 [2024-11-20 07:43:02.735698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.583 [2024-11-20 07:43:02.735716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.583 [2024-11-20 07:43:02.735722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.583 [2024-11-20 07:43:02.744207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.583 [2024-11-20 07:43:02.744223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.583 [2024-11-20 07:43:02.744229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.583 [2024-11-20 07:43:02.752959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.583 [2024-11-20 07:43:02.752976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.583 [2024-11-20 07:43:02.752982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.583 [2024-11-20 07:43:02.762950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.583 [2024-11-20 07:43:02.762967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.583 [2024-11-20 07:43:02.762973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.583 [2024-11-20 07:43:02.772533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.583 [2024-11-20 07:43:02.772549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.583 [2024-11-20 07:43:02.772559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.583 [2024-11-20 07:43:02.784571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.583 [2024-11-20 07:43:02.784587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.583 [2024-11-20 07:43:02.784594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.844 [2024-11-20 07:43:02.792703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.844 [2024-11-20 07:43:02.792720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.792726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.803925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.803942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.803948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.813531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.813548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.813555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.822361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.822378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.822384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.829900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.829916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.829923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.840193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.840210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.840216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.849833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.849849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.849856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.860217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.860237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.860243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.868600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.868617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.868623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.877541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.877558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.877564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.887047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.887063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.887070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.894981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.894998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.895004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.904526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.904543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.904549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.912407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.912424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.912430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.923085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.923102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.923108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.933622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.933638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.933645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.941516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.941533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.941539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.950288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.950305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.950311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.959544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.959561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.959567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.968398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.968415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.968421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.977462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.977479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.977485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.985833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.985850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.985856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:02.994754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:02.994771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:02.994778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:03.003740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:03.003760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:03.003767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:03.012326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:03.012343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:03.012352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:03.021918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:03.021935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:03.021942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.845 [2024-11-20 07:43:03.029694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.845 [2024-11-20 07:43:03.029711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.845 [2024-11-20 07:43:03.029717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.846 [2024-11-20 07:43:03.039837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.846 [2024-11-20 07:43:03.039854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.846 [2024-11-20 07:43:03.039861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.846 [2024-11-20 07:43:03.047961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:44.846 [2024-11-20 07:43:03.047977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.846 [2024-11-20 07:43:03.047984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.056495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.056512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.056519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.065347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.065364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.065370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.074156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.074172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.074179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.083907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.083924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.083931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.091127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.091147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.091154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.100761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.100780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.100786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.109800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.109817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.109824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.118953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.118970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.118976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.128352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.128369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.128376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.137034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.137052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.137058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.145777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.145794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.145800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.154352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.154369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.154375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.165487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.165504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.165510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.173878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.173895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.173901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.185035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.185052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.185058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.194012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.194030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.194036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.201671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.201688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.201695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.211252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.211269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.211276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.220626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.220642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.220648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.229329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.229347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.229354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.238015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.238031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.238038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.247254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.247274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.247280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.108 [2024-11-20 07:43:03.255227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.108 [2024-11-20 07:43:03.255244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.108 [2024-11-20 07:43:03.255251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.109 [2024-11-20 07:43:03.264548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.109 [2024-11-20 07:43:03.264565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.109 [2024-11-20 07:43:03.264572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.109 [2024-11-20 07:43:03.273190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.109 [2024-11-20 07:43:03.273207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.109 [2024-11-20 07:43:03.273213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.109 [2024-11-20 07:43:03.281890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.109 [2024-11-20 07:43:03.281907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.109 [2024-11-20 07:43:03.281913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.109 [2024-11-20 07:43:03.291125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.109 [2024-11-20 07:43:03.291142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.109 [2024-11-20 07:43:03.291148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.109 [2024-11-20 07:43:03.299472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.109 [2024-11-20 07:43:03.299489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.109 [2024-11-20 07:43:03.299495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.109 [2024-11-20 07:43:03.308740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0)
00:28:45.109 [2024-11-20 07:43:03.308761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.109 [2024-11-20 07:43:03.308767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0
m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.317909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.317925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.371 [2024-11-20 07:43:03.317932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.326954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.326971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.371 [2024-11-20 07:43:03.326978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.336235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.336252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.371 [2024-11-20 07:43:03.336258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.345044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.345061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.371 [2024-11-20 07:43:03.345068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.354257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.354274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.371 [2024-11-20 07:43:03.354280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.363611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.363627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.371 [2024-11-20 07:43:03.363634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.371421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.371439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.371 [2024-11-20 07:43:03.371445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.380861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.380878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.371 [2024-11-20 07:43:03.380885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.389884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.389901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.371 [2024-11-20 07:43:03.389908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.399356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.399373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.371 [2024-11-20 07:43:03.399382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.406939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.406956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.371 [2024-11-20 07:43:03.406963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.416656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.416673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.371 [2024-11-20 07:43:03.416679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.426153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.426171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.371 [2024-11-20 07:43:03.426177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.371 [2024-11-20 07:43:03.434752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.371 [2024-11-20 07:43:03.434769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.434776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.443458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.443475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.443482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.452796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.452813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.452820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.462221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.462238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.462244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.471022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.471039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.471045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.479970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.479990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.479997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.488723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.488739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.488751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.497510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.497527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:45.372 [2024-11-20 07:43:03.497533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.506272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.506289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.506295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.515445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.515462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.515468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.524238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.524256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.524262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.532883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.532900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.532906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.542022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.542039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.542045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.550207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.550225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.550231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.561033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.561051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:4797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.561057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.372 [2024-11-20 07:43:03.570311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.372 [2024-11-20 07:43:03.570328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.372 [2024-11-20 07:43:03.570334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.634 [2024-11-20 07:43:03.577781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.634 [2024-11-20 07:43:03.577798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.634 [2024-11-20 07:43:03.577805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.634 [2024-11-20 07:43:03.588855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.634 [2024-11-20 07:43:03.588872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.634 [2024-11-20 07:43:03.588879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.634 [2024-11-20 07:43:03.600240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.634 [2024-11-20 07:43:03.600257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.634 [2024-11-20 07:43:03.600264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.634 [2024-11-20 07:43:03.612300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.634 [2024-11-20 07:43:03.612318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.634 [2024-11-20 07:43:03.612324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.634 [2024-11-20 07:43:03.622066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.634 [2024-11-20 07:43:03.622083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.634 [2024-11-20 07:43:03.622090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.634 [2024-11-20 07:43:03.630438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.634 [2024-11-20 07:43:03.630456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.634 [2024-11-20 07:43:03.630462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.634 [2024-11-20 07:43:03.639127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.634 [2024-11-20 07:43:03.639147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.634 [2024-11-20 07:43:03.639154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.634 [2024-11-20 07:43:03.648829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.634 [2024-11-20 07:43:03.648846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.634 [2024-11-20 07:43:03.648852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.634 [2024-11-20 07:43:03.657498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.634 [2024-11-20 07:43:03.657516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.634 [2024-11-20 07:43:03.657522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.634 [2024-11-20 07:43:03.665841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.634 [2024-11-20 07:43:03.665858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.634 [2024-11-20 07:43:03.665864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.634 [2024-11-20 07:43:03.674709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.634 [2024-11-20 07:43:03.674727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.634 [2024-11-20 07:43:03.674733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.634 [2024-11-20 07:43:03.684734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e031c0) 00:28:45.634 [2024-11-20 07:43:03.684756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.634 [2024-11-20 07:43:03.684762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.634 27984.50 IOPS, 109.31 MiB/s 00:28:45.634 Latency(us) 00:28:45.634 [2024-11-20T06:43:03.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s 
[2024-11-20T06:43:03.844Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:45.634 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:45.634 nvme0n1            :       2.00   28006.46     109.40      0.00     0.00    4565.86    2266.45   13161.81
[2024-11-20T06:43:03.844Z] ===================================================================================================================
00:28:45.634 [2024-11-20T06:43:03.844Z] Total              :            28006.46     109.40      0.00     0.00    4565.86    2266.45   13161.81
00:28:45.634 {
00:28:45.634   "results": [
00:28:45.634     {
00:28:45.634       "job": "nvme0n1",
00:28:45.634       "core_mask": "0x2",
00:28:45.634       "workload": "randread",
00:28:45.634       "status": "finished",
00:28:45.634       "queue_depth": 128,
00:28:45.634       "io_size": 4096,
00:28:45.634       "runtime": 2.003002,
00:28:45.634       "iops": 28006.46230008757,
00:28:45.634       "mibps": 109.40024335971707,
00:28:45.634       "io_failed": 0,
00:28:45.634       "io_timeout": 0,
00:28:45.634       "avg_latency_us": 4565.864436244362,
00:28:45.634       "min_latency_us": 2266.4533333333334,
00:28:45.634       "max_latency_us": 13161.813333333334
00:28:45.634     }
00:28:45.634   ],
00:28:45.634   "core_count": 1
00:28:45.634 }
00:28:45.634 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:45.634 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:45.634 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:45.634 | .driver_specific
00:28:45.634 | .nvme_error
00:28:45.634 | .status_code
00:28:45.634 | .command_transient_transport_error'
00:28:45.634 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:45.895 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 ))
00:28:45.895 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3574881
00:28:45.895 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3574881 ']'
00:28:45.895 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3574881
00:28:45.895 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:45.895 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:45.895 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3574881
00:28:45.895 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:45.895 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:45.895 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3574881'
00:28:45.895 killing process with pid 3574881
00:28:45.895 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3574881
00:28:45.895 Received shutdown signal, test time was about 2.000000 seconds
00:28:45.895
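For reference, the check traced just above ("get_transient_errcount nvme0n1" followed by "(( 219 > 0 ))") can be reconstructed as a small shell helper. This is a sketch based only on the commands visible in the trace; the workspace path is assumed from this job:

    get_transient_errcount() {
        local bdev=$1
        # --nvme-error-stat was passed to bdev_nvme_set_options earlier; that is
        # what makes the per-status error counters appear in bdev_get_iostat.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    # The test passes if at least one transient transport error was counted;
    # this run saw 219 of them:
    (( $(get_transient_errcount nvme0n1) > 0 ))

The performance summary above is also self-consistent: at queue depth 128, Little's law gives 128 / 28006.46 IOPS ≈ 4570 us average latency, close to the reported 4565.86 us.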
00:28:45.895 Latency(us)
[2024-11-20T06:43:04.105Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
[2024-11-20T06:43:04.105Z] ===================================================================================================================
00:28:45.895 [2024-11-20T06:43:04.105Z] Total              :                0.00       0.00      0.00     0.00       0.00       0.00       0.00
00:28:45.895 07:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3574881
00:28:45.895 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:45.895 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:45.895 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:45.895 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:45.895 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:45.895 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3575593
00:28:45.896 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3575593 /var/tmp/bperf.sock
00:28:45.896 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3575593 ']'
00:28:45.896 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:45.896 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:45.896 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:45.896 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:45.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:45.896 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:45.896 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:46.156 [2024-11-20 07:43:04.114233] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
00:28:46.156 [2024-11-20 07:43:04.114290] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3575593 ]
00:28:46.156 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:46.156 Zero copy mechanism will not be used.
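The second error run repeats the launch pattern above with 128 KiB reads at queue depth 16. A sketch of what run_bperf_err does up to this point, based on the traced commands; the socket-polling loop is a stand-in for SPDK's waitforlisten helper, not its actual implementation:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Start bdevperf idle: -z makes it wait for an RPC "perform_tests" call
    # instead of running immediately, so error injection can be armed first.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Poll until the RPC server answers on the UNIX socket (max_retries=100
    # mirrors the traced value; rpc_get_methods is a cheap built-in RPC).
    for ((i = 0; i < 100; i++)); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done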
00:28:46.156 [2024-11-20 07:43:04.197682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:46.156 [2024-11-20 07:43:04.227216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:46.727 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:46.727 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:46.727 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:46.727 07:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:46.987 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:46.987 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:46.987 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:46.987 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:46.987 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:46.987 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:47.247 nvme0n1
00:28:47.247 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:47.247 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:47.247 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:47.247 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:47.247 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:47.247 07:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:47.248 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:47.248 Zero copy mechanism will not be used.
00:28:47.248 Running I/O for 2 seconds...
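The trace above is the whole arming sequence for a digest-error run, spread across several helpers. Collected into one sketch (every RPC appears verbatim in the trace; per the socket paths shown, bperf_rpc talks to bdevperf on /var/tmp/bperf.sock, while rpc_cmd is the autotest helper assumed here to address the nvmf target application on rpc.py's default socket):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    rpc_cmd()   { "$SPDK/scripts/rpc.py" "$@"; }   # stand-in for the autotest helper

    # Initiator side: count NVMe errors per status code, and retry transient
    # failures forever so injected errors show up in iostat without failing I/O.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any stale crc32c injection, then attach the controller with data
    # digest enabled (--ddgst) so received payloads are CRC-checked.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm corruption of crc32c results (flags exactly as traced), then start
    # the run; each corrupted digest surfaces on the host as a data digest error
    # and completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), as seen below.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests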
00:28:47.248 [2024-11-20 07:43:05.452913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60)
[... roughly 60 injected failures elided (elapsed 00:28:47.248-00:28:47.773, timestamps 07:43:05.452913 through 07:43:05.970619): the 131072-byte run repeats the same three-line pattern on the new qpair — nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60), then nvme_qpair.c: 243:nvme_io_qpair_print_command for each failed READ (sqid:1, len:32, varying cid/lba), then nvme_qpair.c: 474:spdk_nvme_print_completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0, sqhd cycling 0002/0022/0042/0062, p:0 m:0 dnr:0; this capture breaks off mid-entry at [2024-11-20 07:43:05.970619] ...]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:47.773 [2024-11-20 07:43:05.970641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.773 [2024-11-20 07:43:05.970647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.773 [2024-11-20 07:43:05.974722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:47.773 [2024-11-20 07:43:05.974739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.773 [2024-11-20 07:43:05.974750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.034 [2024-11-20 07:43:05.980304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.034 [2024-11-20 07:43:05.980322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.034 [2024-11-20 07:43:05.980328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.034 [2024-11-20 07:43:05.984621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.034 [2024-11-20 07:43:05.984639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.034 [2024-11-20 07:43:05.984645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.034 [2024-11-20 07:43:05.988896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.034 [2024-11-20 07:43:05.988914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.034 [2024-11-20 07:43:05.988920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.034 [2024-11-20 07:43:05.998266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.034 [2024-11-20 07:43:05.998284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.034 [2024-11-20 07:43:05.998290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.034 [2024-11-20 07:43:06.003996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.034 [2024-11-20 07:43:06.004015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.034 [2024-11-20 07:43:06.004021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:28:48.034 [2024-11-20 07:43:06.012777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.034 [2024-11-20 07:43:06.012795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.034 [2024-11-20 07:43:06.012802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.034 [2024-11-20 07:43:06.018585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.034 [2024-11-20 07:43:06.018603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.034 [2024-11-20 07:43:06.018609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.034 [2024-11-20 07:43:06.022866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.034 [2024-11-20 07:43:06.022884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.022890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.030706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.030724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.030730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.037700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.037718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.037724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.043696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.043714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.043720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.048078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.048097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.048103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.051973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.051991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.051998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.059590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.059608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.059614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.064824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.064842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.064848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.069998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.070016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.070025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.074766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.074783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.074790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.079439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.079456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.079463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.087505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.087523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.087529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.097387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.097405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.097411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.107451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.107469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.107476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.118387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.118405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.118411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.125699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.125717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.125723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.130890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.130908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.130915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.137496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.137514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.137520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.141927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.141945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:48.035 [2024-11-20 07:43:06.141952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.153044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.153061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.153068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.161403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.161421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.161427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.164661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.164679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.164685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.168641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.168659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.168666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.173014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.173032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.173039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.181981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.181999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.182006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.189686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.189703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6144 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.189713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.198896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.035 [2024-11-20 07:43:06.198914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.035 [2024-11-20 07:43:06.198920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.035 [2024-11-20 07:43:06.210282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.036 [2024-11-20 07:43:06.210301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.036 [2024-11-20 07:43:06.210308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.036 [2024-11-20 07:43:06.223023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.036 [2024-11-20 07:43:06.223042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.036 [2024-11-20 07:43:06.223048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.036 [2024-11-20 07:43:06.235361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.036 [2024-11-20 07:43:06.235379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.036 [2024-11-20 07:43:06.235385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.297 [2024-11-20 07:43:06.247840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.297 [2024-11-20 07:43:06.247859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.297 [2024-11-20 07:43:06.247865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.297 [2024-11-20 07:43:06.260356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.297 [2024-11-20 07:43:06.260374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.297 [2024-11-20 07:43:06.260381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.297 [2024-11-20 07:43:06.273145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.297 [2024-11-20 07:43:06.273164] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.297 [2024-11-20 07:43:06.273170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.297 [2024-11-20 07:43:06.285179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.297 [2024-11-20 07:43:06.285197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.297 [2024-11-20 07:43:06.285204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.297 [2024-11-20 07:43:06.295143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.297 [2024-11-20 07:43:06.295165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.297 [2024-11-20 07:43:06.295171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.297 [2024-11-20 07:43:06.307513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.297 [2024-11-20 07:43:06.307532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.297 [2024-11-20 07:43:06.307538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.297 [2024-11-20 07:43:06.317242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.297 [2024-11-20 07:43:06.317260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.297 [2024-11-20 07:43:06.317266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.297 [2024-11-20 07:43:06.327667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.297 [2024-11-20 07:43:06.327685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.297 [2024-11-20 07:43:06.327692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.297 [2024-11-20 07:43:06.338737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.297 [2024-11-20 07:43:06.338761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.338767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.347771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.347789] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.347796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.357450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.357469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.357475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.366839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.366857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.366864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.378564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.378582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.378588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.387813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.387832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.387838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.399285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.399303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.399309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.409515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.409534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.409540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.416849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.416867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.416874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.423533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.423551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.423557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.431999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.432017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.432023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.443258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.443276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.443282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.298 3747.00 IOPS, 468.38 MiB/s [2024-11-20T06:43:06.508Z] [2024-11-20 07:43:06.454087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.454105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.454112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.465758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.465776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.465787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.475172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.475190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.475197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 
07:43:06.484882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.484900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.484907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.298 [2024-11-20 07:43:06.495623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.298 [2024-11-20 07:43:06.495641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.298 [2024-11-20 07:43:06.495647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.559 [2024-11-20 07:43:06.504696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.559 [2024-11-20 07:43:06.504714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.559 [2024-11-20 07:43:06.504720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.559 [2024-11-20 07:43:06.515209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.559 [2024-11-20 07:43:06.515228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.559 [2024-11-20 07:43:06.515234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.559 [2024-11-20 07:43:06.526757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.559 [2024-11-20 07:43:06.526775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.526781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.535421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.535439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.535445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.546428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.546446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.546453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.554733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.554757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.554763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.566124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.566142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.566149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.577183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.577202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.577208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.589661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.589679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.589686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.602076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.602095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.602102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.614722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.614740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.614752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.627618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.627637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.627643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.639232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.639250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.639256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.649535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.649554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.649564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.661537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.661556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.661562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.672789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.672808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.672814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.684219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.684239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.684245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.692518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.692536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.692543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.701625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.701644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.701650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.709209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.709228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.709235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.719186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.719204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.719211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.729048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.729067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.729073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.740517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.740542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.740548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.751240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.751258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.751264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.560 [2024-11-20 07:43:06.762453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.560 [2024-11-20 07:43:06.762471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.560 [2024-11-20 07:43:06.762477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.821 [2024-11-20 07:43:06.774617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.821 [2024-11-20 07:43:06.774635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.821 
[2024-11-20 07:43:06.774641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.821 [2024-11-20 07:43:06.786276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.821 [2024-11-20 07:43:06.786295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.821 [2024-11-20 07:43:06.786302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.821 [2024-11-20 07:43:06.797930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.821 [2024-11-20 07:43:06.797948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.821 [2024-11-20 07:43:06.797955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.821 [2024-11-20 07:43:06.809425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.821 [2024-11-20 07:43:06.809443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.821 [2024-11-20 07:43:06.809450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.821 [2024-11-20 07:43:06.821321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.821 [2024-11-20 07:43:06.821339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.821 [2024-11-20 07:43:06.821346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.821 [2024-11-20 07:43:06.833810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.821 [2024-11-20 07:43:06.833829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.821 [2024-11-20 07:43:06.833836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.821 [2024-11-20 07:43:06.845625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.821 [2024-11-20 07:43:06.845644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.821 [2024-11-20 07:43:06.845650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.821 [2024-11-20 07:43:06.857218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.821 [2024-11-20 07:43:06.857237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.821 [2024-11-20 07:43:06.857243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.821 [2024-11-20 07:43:06.868108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.821 [2024-11-20 07:43:06.868126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.821 [2024-11-20 07:43:06.868133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.821 [2024-11-20 07:43:06.880966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.821 [2024-11-20 07:43:06.880984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:06.880991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.822 [2024-11-20 07:43:06.891970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.822 [2024-11-20 07:43:06.891989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:06.891995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.822 [2024-11-20 07:43:06.902402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.822 [2024-11-20 07:43:06.902420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:06.902427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.822 [2024-11-20 07:43:06.914862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.822 [2024-11-20 07:43:06.914880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:06.914886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.822 [2024-11-20 07:43:06.927217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.822 [2024-11-20 07:43:06.927235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:06.927241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.822 [2024-11-20 07:43:06.938432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.822 [2024-11-20 07:43:06.938450] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:06.938459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.822 [2024-11-20 07:43:06.949502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.822 [2024-11-20 07:43:06.949520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:06.949526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.822 [2024-11-20 07:43:06.961403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.822 [2024-11-20 07:43:06.961422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:06.961428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.822 [2024-11-20 07:43:06.970887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.822 [2024-11-20 07:43:06.970906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:06.970912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.822 [2024-11-20 07:43:06.982000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.822 [2024-11-20 07:43:06.982019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:06.982025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.822 [2024-11-20 07:43:06.992850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.822 [2024-11-20 07:43:06.992869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:06.992875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.822 [2024-11-20 07:43:07.004908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.822 [2024-11-20 07:43:07.004925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:07.004932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.822 [2024-11-20 07:43:07.014066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.822 [2024-11-20 07:43:07.014085] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:07.014091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.822 [2024-11-20 07:43:07.025283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:48.822 [2024-11-20 07:43:07.025301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.822 [2024-11-20 07:43:07.025307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.083 [2024-11-20 07:43:07.035884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.083 [2024-11-20 07:43:07.035906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.083 [2024-11-20 07:43:07.035912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.083 [2024-11-20 07:43:07.046984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.083 [2024-11-20 07:43:07.047003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.083 [2024-11-20 07:43:07.047009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.083 [2024-11-20 07:43:07.056392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.083 [2024-11-20 07:43:07.056411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.083 [2024-11-20 07:43:07.056417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.083 [2024-11-20 07:43:07.067005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.083 [2024-11-20 07:43:07.067024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.083 [2024-11-20 07:43:07.067030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.083 [2024-11-20 07:43:07.075743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.083 [2024-11-20 07:43:07.075767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.083 [2024-11-20 07:43:07.075773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.083 [2024-11-20 07:43:07.087380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc3a60) 00:28:49.083 [2024-11-20 07:43:07.087399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.083 [2024-11-20 07:43:07.087405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.083 [2024-11-20 07:43:07.099094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.083 [2024-11-20 07:43:07.099113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.083 [2024-11-20 07:43:07.099119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.083 [2024-11-20 07:43:07.111084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.083 [2024-11-20 07:43:07.111102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.083 [2024-11-20 07:43:07.111108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.083 [2024-11-20 07:43:07.123918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.123936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.123942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.135799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.135816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.135824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.147304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.147323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.147329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.159753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.159771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.159778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.169927] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.169946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.169952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.180322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.180340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.180347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.191570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.191588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.191594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.202355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.202373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.202380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.210861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.210880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.210886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.220256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.220276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.220285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.230517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.230534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.230540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:28:49.084 [2024-11-20 07:43:07.240995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.241014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.241020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.252305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.252324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.252330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.263223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.263242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.263248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.273673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.273692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.273698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.084 [2024-11-20 07:43:07.284922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.084 [2024-11-20 07:43:07.284941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.084 [2024-11-20 07:43:07.284947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.295383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.295402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.295409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.306131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.306148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.306154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.317069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.317088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.317094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.327911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.327930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.327937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.338416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.338433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.338440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.348625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.348644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.348650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.359831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.359849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.359855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.366715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.366732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.366738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.376235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.376253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.376259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.387777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.387795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.387801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.396966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.396984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.396993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.408354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.408373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.408379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.419600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.419618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.419624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.431598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.431617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.431623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.345 [2024-11-20 07:43:07.443405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.443424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.345 [2024-11-20 07:43:07.443430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.345 3295.00 IOPS, 411.88 MiB/s [2024-11-20T06:43:07.555Z] [2024-11-20 07:43:07.454660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3a60) 00:28:49.345 [2024-11-20 07:43:07.454678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.345 [2024-11-20 07:43:07.454684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.345
00:28:49.345 Latency(us)
00:28:49.345 [2024-11-20T06:43:07.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:49.345 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:49.345 nvme0n1 : 2.00 3296.83 412.10 0.00 0.00 4848.56 593.92 13107.20
00:28:49.345 [2024-11-20T06:43:07.555Z] ===================================================================================================================
00:28:49.345 [2024-11-20T06:43:07.555Z] Total : 3296.83 412.10 0.00 0.00 4848.56 593.92 13107.20
00:28:49.345 {
00:28:49.345   "results": [
00:28:49.345     {
00:28:49.345       "job": "nvme0n1",
00:28:49.345       "core_mask": "0x2",
00:28:49.345       "workload": "randread",
00:28:49.345       "status": "finished",
00:28:49.345       "queue_depth": 16,
00:28:49.345       "io_size": 131072,
00:28:49.345       "runtime": 2.003744,
00:28:49.345       "iops": 3296.828337352476,
00:28:49.345       "mibps": 412.1035421690595,
00:28:49.345       "io_failed": 0,
00:28:49.345       "io_timeout": 0,
00:28:49.345       "avg_latency_us": 4848.56146129781,
00:28:49.345       "min_latency_us": 593.92,
00:28:49.345       "max_latency_us": 13107.2
00:28:49.345     }
00:28:49.345   ],
00:28:49.345   "core_count": 1
00:28:49.345 }
00:28:49.345 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:49.345 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:49.345 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:49.345 | .driver_specific
00:28:49.345 | .nvme_error
00:28:49.345 | .status_code
00:28:49.345 | .command_transient_transport_error'
00:28:49.345 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:49.605 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 ))
00:28:49.605 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3575593
00:28:49.605 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3575593 ']'
00:28:49.605 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3575593
00:28:49.605 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:49.605 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:49.605 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3575593
00:28:49.605 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:49.605 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:49.605 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3575593'
killing process with pid 3575593
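Each corrupted digest in the run above completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) and is retried by the bdev layer, which is why the job JSON still reports "io_failed": 0; the pass/fail signal comes from the NVMe error counters instead. The get_transient_errcount helper traced here reduces to a single pipeline. A minimal sketch of that check, assuming the bperf.sock RPC socket is still up (errcount is a name introduced only for illustration):

  # Read per-bdev NVMe error stats over the bdevperf RPC socket and extract the
  # transient-transport-error counter that digest.sh compares against zero.
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))   # in this run the counter read back 214, so the assertion passes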
07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3575593
00:28:49.606 Received shutdown signal, test time was about 2.000000 seconds
00:28:49.606
00:28:49.606 Latency(us)
00:28:49.606 [2024-11-20T06:43:07.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:49.606 [2024-11-20T06:43:07.816Z] ===================================================================================================================
00:28:49.606 [2024-11-20T06:43:07.816Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:49.606 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3575593
00:28:49.866 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:49.866 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:49.866 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:49.866 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:49.866 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:49.866 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3576271
00:28:49.866 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3576271 /var/tmp/bperf.sock
00:28:49.866 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3576271 ']'
00:28:49.866 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:49.866 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:49.866 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:49.866 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:49.866 07:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:49.866 [2024-11-20 07:43:07.900207] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
00:28:49.866 [2024-11-20 07:43:07.900277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3576271 ]
00:28:49.866 [2024-11-20 07:43:07.987032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:49.866 [2024-11-20 07:43:08.015962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:50.809 07:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:50.809 07:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:50.809 07:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:50.809 07:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:50.809 07:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:50.809 07:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:50.809 07:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:50.809 07:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:50.809 07:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:50.809 07:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:51.069 nvme0n1
00:28:51.070 07:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:51.070 07:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:51.070 07:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:51.070 07:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:51.070 07:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:51.070 07:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:51.070 Running I/O for 2 seconds...
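The trace above is the full setup for the second error pass (randwrite, 4 KiB blocks, queue depth 128): injection is disabled while the controller attaches so the data-digest connection (--ddgst) comes up cleanly, crc32c corruption is then re-armed, and the timed run is started over the RPC socket. A compact sketch of the same sequence, with rpc.py and bdevperf.py shortened to their basenames (full paths as in the trace above); the RPC variable is introduced here only to keep the lines short:

  RPC="rpc.py -s /var/tmp/bperf.sock"                                 # scripts/rpc.py aimed at the bdevperf app
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # keep NVMe error counters, retry I/O indefinitely
  $RPC accel_error_inject_error -o crc32c -t disable                  # no injection while the controller attaches
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                 # TCP data digest enabled on the connection
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256           # re-arm crc32c corruption, -i 256 as traced
  bdevperf.py -s /var/tmp/bperf.sock perform_tests                    # kick off the 2-second randwrite run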
00:28:51.070 [2024-11-20 07:43:09.209144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef7538 00:28:51.070 [2024-11-20 07:43:09.209917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.070 [2024-11-20 07:43:09.209945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:51.070 [2024-11-20 07:43:09.217811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef1868 00:28:51.070 [2024-11-20 07:43:09.218581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.070 [2024-11-20 07:43:09.218601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:51.070 [2024-11-20 07:43:09.226276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee4de8 00:28:51.070 [2024-11-20 07:43:09.227038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.070 [2024-11-20 07:43:09.227060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:51.070 [2024-11-20 07:43:09.234726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef7538 00:28:51.070 [2024-11-20 07:43:09.235486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.070 [2024-11-20 07:43:09.235503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:51.070 [2024-11-20 07:43:09.243195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef1868 00:28:51.070 [2024-11-20 07:43:09.243951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.070 [2024-11-20 07:43:09.243967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:51.070 [2024-11-20 07:43:09.251648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee4de8 00:28:51.070 [2024-11-20 07:43:09.252414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.070 [2024-11-20 07:43:09.252431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:51.070 [2024-11-20 07:43:09.260097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef7538 00:28:51.070 [2024-11-20 07:43:09.260809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.070 [2024-11-20 07:43:09.260825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:51.070 [2024-11-20 07:43:09.268284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef6cc8 00:28:51.070 [2024-11-20 07:43:09.269030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.070 [2024-11-20 07:43:09.269046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.277139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee8088 00:28:51.330 [2024-11-20 07:43:09.277878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.277895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.285579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee6fa8 00:28:51.330 [2024-11-20 07:43:09.286325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.286342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.293988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee5ec8 00:28:51.330 [2024-11-20 07:43:09.294725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.294741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.302408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee4de8 00:28:51.330 [2024-11-20 07:43:09.303171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.303187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.310942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee3d08 00:28:51.330 [2024-11-20 07:43:09.311701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.311716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.319366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee2c28 00:28:51.330 [2024-11-20 07:43:09.320121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.320137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.327797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eebb98 00:28:51.330 [2024-11-20 07:43:09.328558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.328575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.336230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eecc78 00:28:51.330 [2024-11-20 07:43:09.336966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.336982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.344632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eedd58 00:28:51.330 [2024-11-20 07:43:09.345377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.345393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.353035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:51.330 [2024-11-20 07:43:09.353769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.353785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.361453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeff18 00:28:51.330 [2024-11-20 07:43:09.362194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.362210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.369849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef0ff8 00:28:51.330 [2024-11-20 07:43:09.370547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.370562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.378254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efa7d8 00:28:51.330 [2024-11-20 07:43:09.378999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.379014] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.386639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efb480 00:28:51.330 [2024-11-20 07:43:09.387398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.387414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.395053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eea680 00:28:51.330 [2024-11-20 07:43:09.395801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.395818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.403465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee95a0 00:28:51.330 [2024-11-20 07:43:09.404219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.404235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.411884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee84c0 00:28:51.330 [2024-11-20 07:43:09.412627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.412643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.420284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee73e0 00:28:51.330 [2024-11-20 07:43:09.421006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.421021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.428687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee6300 00:28:51.330 [2024-11-20 07:43:09.429434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.429450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.436527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efb048 00:28:51.330 [2024-11-20 07:43:09.437230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.437245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.445778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eddc00 00:28:51.330 [2024-11-20 07:43:09.446636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.446655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.454188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efda78 00:28:51.330 [2024-11-20 07:43:09.455008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.455024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.462739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efd208 00:28:51.330 [2024-11-20 07:43:09.463602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.463618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.471135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efc128 00:28:51.330 [2024-11-20 07:43:09.471992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.330 [2024-11-20 07:43:09.472008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.330 [2024-11-20 07:43:09.479545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee1710 00:28:51.330 [2024-11-20 07:43:09.480428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.331 [2024-11-20 07:43:09.480444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.331 [2024-11-20 07:43:09.487958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ede8a8 00:28:51.331 [2024-11-20 07:43:09.488817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.331 [2024-11-20 07:43:09.488832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.331 [2024-11-20 07:43:09.496368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef0788 00:28:51.331 [2024-11-20 07:43:09.497232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.331 [2024-11-20 07:43:09.497247] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.331 [2024-11-20 07:43:09.504770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef1868 00:28:51.331 [2024-11-20 07:43:09.505643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.331 [2024-11-20 07:43:09.505659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.331 [2024-11-20 07:43:09.514872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eec840 00:28:51.331 [2024-11-20 07:43:09.516189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.331 [2024-11-20 07:43:09.516204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:51.331 [2024-11-20 07:43:09.521718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eecc78 00:28:51.331 [2024-11-20 07:43:09.522356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.331 [2024-11-20 07:43:09.522372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:51.331 [2024-11-20 07:43:09.530058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef9b30 00:28:51.331 [2024-11-20 07:43:09.530570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.331 [2024-11-20 07:43:09.530586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:51.591 [2024-11-20 07:43:09.538473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee38d0 00:28:51.591 [2024-11-20 07:43:09.539098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.591 [2024-11-20 07:43:09.539114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:51.591 [2024-11-20 07:43:09.548016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eecc78 00:28:51.591 [2024-11-20 07:43:09.549207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.591 [2024-11-20 07:43:09.549222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:51.591 [2024-11-20 07:43:09.556687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef3a28 00:28:51.591 [2024-11-20 07:43:09.557872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.591 
[2024-11-20 07:43:09.557888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:51.591 [2024-11-20 07:43:09.565088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee1b48 00:28:51.591 [2024-11-20 07:43:09.566285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.591 [2024-11-20 07:43:09.566300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:51.591 [2024-11-20 07:43:09.573496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee7c50 00:28:51.591 [2024-11-20 07:43:09.574685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.591 [2024-11-20 07:43:09.574700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:51.591 [2024-11-20 07:43:09.581918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef3a28 00:28:51.591 [2024-11-20 07:43:09.583105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.591 [2024-11-20 07:43:09.583121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:51.591 [2024-11-20 07:43:09.590355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee1b48 00:28:51.591 [2024-11-20 07:43:09.591546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.591 [2024-11-20 07:43:09.591562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:51.591 [2024-11-20 07:43:09.597299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee84c0 00:28:51.591 [2024-11-20 07:43:09.598048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.591 [2024-11-20 07:43:09.598063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.591 [2024-11-20 07:43:09.605688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef57b0 00:28:51.591 [2024-11-20 07:43:09.606438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.591 [2024-11-20 07:43:09.606453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.591 [2024-11-20 07:43:09.614118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef46d0 00:28:51.591 [2024-11-20 07:43:09.614816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20700 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:51.591 [2024-11-20 07:43:09.614831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.591 [2024-11-20 07:43:09.622543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef6458 00:28:51.591 [2024-11-20 07:43:09.623273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.591 [2024-11-20 07:43:09.623288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.630960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efa7d8 00:28:51.592 [2024-11-20 07:43:09.631698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.631714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.639369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efb480 00:28:51.592 [2024-11-20 07:43:09.640101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.640117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.647774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eea248 00:28:51.592 [2024-11-20 07:43:09.648527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.648542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.656165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef96f8 00:28:51.592 [2024-11-20 07:43:09.656889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.656905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.664611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef8618 00:28:51.592 [2024-11-20 07:43:09.665322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.665340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.673042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee4de8 00:28:51.592 [2024-11-20 07:43:09.673786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7970 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.673802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.681460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee7c50 00:28:51.592 [2024-11-20 07:43:09.682201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.682217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.689864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee6b70 00:28:51.592 [2024-11-20 07:43:09.690596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.690612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.698257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee5a90 00:28:51.592 [2024-11-20 07:43:09.698996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.699011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.706651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:51.592 [2024-11-20 07:43:09.707361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.707376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.715080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eedd58 00:28:51.592 [2024-11-20 07:43:09.715812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.715828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.723500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eecc78 00:28:51.592 [2024-11-20 07:43:09.724268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.724283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.731944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee9168 00:28:51.592 [2024-11-20 07:43:09.732638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:4232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.732654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.740345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef5be8 00:28:51.592 [2024-11-20 07:43:09.741068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.741087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.748742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef4b08 00:28:51.592 [2024-11-20 07:43:09.749471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.749486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.757154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef3a28 00:28:51.592 [2024-11-20 07:43:09.757888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.757904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.765565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efa3a0 00:28:51.592 [2024-11-20 07:43:09.766306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.766323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.773973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efb8b8 00:28:51.592 [2024-11-20 07:43:09.774716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.774731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.782367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee9e10 00:28:51.592 [2024-11-20 07:43:09.783119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.783136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.592 [2024-11-20 07:43:09.790759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef92c0 00:28:51.592 [2024-11-20 07:43:09.791477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:104 nsid:1 lba:13782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.592 [2024-11-20 07:43:09.791493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:51.853 [2024-11-20 07:43:09.799420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efe720 00:28:51.854 [2024-11-20 07:43:09.799986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.800003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.807722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef7970 00:28:51.854 [2024-11-20 07:43:09.808238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.808254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.816437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eff3c8 00:28:51.854 [2024-11-20 07:43:09.817277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.817293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.824855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef1868 00:28:51.854 [2024-11-20 07:43:09.825660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.825676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.833247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef0788 00:28:51.854 [2024-11-20 07:43:09.834078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.834094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.841634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ede8a8 00:28:51.854 [2024-11-20 07:43:09.842477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.842492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.850016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee1710 00:28:51.854 [2024-11-20 07:43:09.850831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.850847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.858434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efd208 00:28:51.854 [2024-11-20 07:43:09.859278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.859294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.866868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016edece0 00:28:51.854 [2024-11-20 07:43:09.867714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.867730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.875264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee27f0 00:28:51.854 [2024-11-20 07:43:09.876090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.876106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.883644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef7100 00:28:51.854 [2024-11-20 07:43:09.884480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.884495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.892042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efb048 00:28:51.854 [2024-11-20 07:43:09.892866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.892882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.900471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eff3c8 00:28:51.854 [2024-11-20 07:43:09.901316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.901331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.908912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef0788 00:28:51.854 [2024-11-20 
07:43:09.909751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.909767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.917377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee1710 00:28:51.854 [2024-11-20 07:43:09.918216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.918232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.925798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efd208 00:28:51.854 [2024-11-20 07:43:09.926642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.926657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.934210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016edece0 00:28:51.854 [2024-11-20 07:43:09.935055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.935071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.942642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee27f0 00:28:51.854 [2024-11-20 07:43:09.943484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.943499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.951091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef7100 00:28:51.854 [2024-11-20 07:43:09.951883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.951899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.959515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efb048 00:28:51.854 [2024-11-20 07:43:09.960349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.960367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.967908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eff3c8 
00:28:51.854 [2024-11-20 07:43:09.968731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.968750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.976328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef0788 00:28:51.854 [2024-11-20 07:43:09.977158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.854 [2024-11-20 07:43:09.977174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.854 [2024-11-20 07:43:09.984724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee1710 00:28:51.855 [2024-11-20 07:43:09.985547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.855 [2024-11-20 07:43:09.985563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.855 [2024-11-20 07:43:09.993152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efd208 00:28:51.855 [2024-11-20 07:43:09.993960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.855 [2024-11-20 07:43:09.993976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.855 [2024-11-20 07:43:10.002060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016edece0 00:28:51.855 [2024-11-20 07:43:10.002919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.855 [2024-11-20 07:43:10.002936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.855 [2024-11-20 07:43:10.010488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee27f0 00:28:51.855 [2024-11-20 07:43:10.011332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.855 [2024-11-20 07:43:10.011348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.855 [2024-11-20 07:43:10.018906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef7100 00:28:51.855 [2024-11-20 07:43:10.019749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.855 [2024-11-20 07:43:10.019765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.855 [2024-11-20 07:43:10.027817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12a4750) with pdu=0x200016efb048 00:28:51.855 [2024-11-20 07:43:10.028659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.855 [2024-11-20 07:43:10.028674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.855 [2024-11-20 07:43:10.036226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eff3c8 00:28:51.855 [2024-11-20 07:43:10.037041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.855 [2024-11-20 07:43:10.037057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.855 [2024-11-20 07:43:10.044662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef0788 00:28:51.855 [2024-11-20 07:43:10.045490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.855 [2024-11-20 07:43:10.045506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:51.855 [2024-11-20 07:43:10.053139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee1710 00:28:51.855 [2024-11-20 07:43:10.053978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.855 [2024-11-20 07:43:10.053994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:52.116 [2024-11-20 07:43:10.061552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efd208 00:28:52.116 [2024-11-20 07:43:10.062389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.116 [2024-11-20 07:43:10.062404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:52.116 [2024-11-20 07:43:10.069967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016edece0 00:28:52.116 [2024-11-20 07:43:10.070806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.116 [2024-11-20 07:43:10.070822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:52.116 [2024-11-20 07:43:10.078385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee27f0 00:28:52.116 [2024-11-20 07:43:10.079192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.116 [2024-11-20 07:43:10.079207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:52.116 [2024-11-20 07:43:10.086797] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef7100 00:28:52.116 [2024-11-20 07:43:10.087646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.116 [2024-11-20 07:43:10.087662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:52.116 [2024-11-20 07:43:10.095246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efb048 00:28:52.116 [2024-11-20 07:43:10.096085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.116 [2024-11-20 07:43:10.096101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:52.116 [2024-11-20 07:43:10.103659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eff3c8 00:28:52.116 [2024-11-20 07:43:10.104495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.104511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.112525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eed4e8 00:28:52.117 [2024-11-20 07:43:10.113196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.113212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.120846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef2d80 00:28:52.117 [2024-11-20 07:43:10.121434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.121450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.129258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eed4e8 00:28:52.117 [2024-11-20 07:43:10.129882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.129897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.137689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef2d80 00:28:52.117 [2024-11-20 07:43:10.138301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.138317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.146388] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef92c0 00:28:52.117 [2024-11-20 07:43:10.147367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.147383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.154803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016edfdc0 00:28:52.117 [2024-11-20 07:43:10.155775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.155790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.163221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee23b8 00:28:52.117 [2024-11-20 07:43:10.164184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.164200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.171632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee3498 00:28:52.117 [2024-11-20 07:43:10.172595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.172611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.180086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee4578 00:28:52.117 [2024-11-20 07:43:10.181035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.181055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.188518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eed4e8 00:28:52.117 [2024-11-20 07:43:10.189469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.189485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.196946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eee5c8 00:28:52.117 [2024-11-20 07:43:10.197929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.197945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:52.117 30195.00 IOPS, 
117.95 MiB/s [2024-11-20T06:43:10.327Z] [2024-11-20 07:43:10.205358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.206462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.206478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.213935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.214915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.214930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.222373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.223302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.223318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.230803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.231728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.231743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.239203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.240165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.240180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.247613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.248537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.248552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.256021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.256999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.257015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.264455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.265441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.265456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.273075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.274025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.274041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.281501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.282465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.282481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.289906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.290864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.290880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.298317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.299281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.299296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.306730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.307714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.307729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.117 [2024-11-20 07:43:10.315164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.117 [2024-11-20 07:43:10.316242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.117 [2024-11-20 07:43:10.316257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.378 [2024-11-20 07:43:10.323687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.378 [2024-11-20 07:43:10.324668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.378 [2024-11-20 07:43:10.324684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.378 [2024-11-20 07:43:10.332110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.378 [2024-11-20 07:43:10.333060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.378 [2024-11-20 07:43:10.333076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.378 [2024-11-20 07:43:10.340509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.378 [2024-11-20 07:43:10.341477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.378 [2024-11-20 07:43:10.341493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.378 [2024-11-20 07:43:10.348920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.378 [2024-11-20 07:43:10.349879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.378 [2024-11-20 07:43:10.349894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.378 [2024-11-20 07:43:10.357348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.358278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.358294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.365809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.366767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.366783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.374231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.375213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.375229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.382627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.383593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.383609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.391024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.391995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.392010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.399429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.400406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.400424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.407848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.408805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.408821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.416275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.417242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.417259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.424699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.425678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.425695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.433120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.434083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 
07:43:10.434099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.441526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.442492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.442507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.449954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.450918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.450934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.458381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.459350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.459366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.466813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.467757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.467774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.475218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.476151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.476168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.483632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.484579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.484594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.492042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.493028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11598 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:52.379 [2024-11-20 07:43:10.493044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.500473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.501455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.501471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.508897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.509877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.509893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.517325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.518305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.518321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.525718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.526711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.526727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.534130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.535110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.535126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.542554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.543497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.543513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.550997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.551973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23737 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.551989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.559411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeee38 00:28:52.379 [2024-11-20 07:43:10.560371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.560387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.567249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef7538 00:28:52.379 [2024-11-20 07:43:10.568192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.568207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:52.379 [2024-11-20 07:43:10.576667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee1710 00:28:52.379 [2024-11-20 07:43:10.577734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.379 [2024-11-20 07:43:10.577756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.641 [2024-11-20 07:43:10.585096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eec840 00:28:52.641 [2024-11-20 07:43:10.586170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.641 [2024-11-20 07:43:10.586186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.641 [2024-11-20 07:43:10.593522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef1ca0 00:28:52.641 [2024-11-20 07:43:10.594602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.641 [2024-11-20 07:43:10.594618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.641 [2024-11-20 07:43:10.601942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efbcf0 00:28:52.641 [2024-11-20 07:43:10.603014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.641 [2024-11-20 07:43:10.603030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.641 [2024-11-20 07:43:10.610338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eddc00 00:28:52.641 [2024-11-20 07:43:10.611422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:11827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.641 [2024-11-20 07:43:10.611438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.641 [2024-11-20 07:43:10.618748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef0350 00:28:52.641 [2024-11-20 07:43:10.619832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.641 [2024-11-20 07:43:10.619851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.641 [2024-11-20 07:43:10.627165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef35f0 00:28:52.641 [2024-11-20 07:43:10.628237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.641 [2024-11-20 07:43:10.628253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.641 [2024-11-20 07:43:10.635579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee49b0 00:28:52.641 [2024-11-20 07:43:10.636673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.641 [2024-11-20 07:43:10.636688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.641 [2024-11-20 07:43:10.644005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee8088 00:28:52.641 [2024-11-20 07:43:10.645070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.641 [2024-11-20 07:43:10.645086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.641 [2024-11-20 07:43:10.652418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee6fa8 00:28:52.641 [2024-11-20 07:43:10.653511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.641 [2024-11-20 07:43:10.653527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.641 [2024-11-20 07:43:10.660834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee99d8 00:28:52.641 [2024-11-20 07:43:10.661921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.661937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.669237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee88f8 00:28:52.642 [2024-11-20 07:43:10.670326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:36 nsid:1 lba:4873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.670342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.677663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef5378 00:28:52.642 [2024-11-20 07:43:10.678750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.678766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.686090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee6b70 00:28:52.642 [2024-11-20 07:43:10.687171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.687188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.694520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee5a90 00:28:52.642 [2024-11-20 07:43:10.695615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.695631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.702936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee0630 00:28:52.642 [2024-11-20 07:43:10.704032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.704048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.711342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef1430 00:28:52.642 [2024-11-20 07:43:10.712414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.712430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.719778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efb8b8 00:28:52.642 [2024-11-20 07:43:10.720716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.720732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.728195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efa3a0 00:28:52.642 [2024-11-20 07:43:10.729288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.729304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.736620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eed0b0 00:28:52.642 [2024-11-20 07:43:10.737713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.737729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.745055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee1f80 00:28:52.642 [2024-11-20 07:43:10.746148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.746164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.753616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ede038 00:28:52.642 [2024-11-20 07:43:10.754706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.754722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.762040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeff18 00:28:52.642 [2024-11-20 07:43:10.763133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.763149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.770468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eea680 00:28:52.642 [2024-11-20 07:43:10.771562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.771579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.778919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee4de8 00:28:52.642 [2024-11-20 07:43:10.779993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.780009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.787349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee7c50 00:28:52.642 [2024-11-20 
07:43:10.788435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.788451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.795787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef4298 00:28:52.642 [2024-11-20 07:43:10.796878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.796894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.804194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee84c0 00:28:52.642 [2024-11-20 07:43:10.805273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.805289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.812665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef57b0 00:28:52.642 [2024-11-20 07:43:10.813769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.813785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.821116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef46d0 00:28:52.642 [2024-11-20 07:43:10.822205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.822222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.829547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee5ec8 00:28:52.642 [2024-11-20 07:43:10.830625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.830641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.642 [2024-11-20 07:43:10.837959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eefae0 00:28:52.642 [2024-11-20 07:43:10.839046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.642 [2024-11-20 07:43:10.839065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.903 [2024-11-20 07:43:10.846372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ede8a8 
00:28:52.903 [2024-11-20 07:43:10.847452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.903 [2024-11-20 07:43:10.847468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.903 [2024-11-20 07:43:10.854771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee1710 00:28:52.903 [2024-11-20 07:43:10.855832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.903 [2024-11-20 07:43:10.855848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.903 [2024-11-20 07:43:10.863187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eec840 00:28:52.903 [2024-11-20 07:43:10.864272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.903 [2024-11-20 07:43:10.864288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.903 [2024-11-20 07:43:10.871603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef1ca0 00:28:52.903 [2024-11-20 07:43:10.872682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.903 [2024-11-20 07:43:10.872698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.880031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efbcf0 00:28:52.904 [2024-11-20 07:43:10.881116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.881132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.888444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eddc00 00:28:52.904 [2024-11-20 07:43:10.889524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.889540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.896854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef0350 00:28:52.904 [2024-11-20 07:43:10.897888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.897904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.905260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) 
with pdu=0x200016ef35f0 00:28:52.904 [2024-11-20 07:43:10.906341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.906358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.913686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee49b0 00:28:52.904 [2024-11-20 07:43:10.914760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.914776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.922114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee8088 00:28:52.904 [2024-11-20 07:43:10.923217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.923233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.930540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee6fa8 00:28:52.904 [2024-11-20 07:43:10.931614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.931630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.938965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee99d8 00:28:52.904 [2024-11-20 07:43:10.940089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.940105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.947420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee88f8 00:28:52.904 [2024-11-20 07:43:10.948499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.948515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.955870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef5378 00:28:52.904 [2024-11-20 07:43:10.956908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.956924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.964298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12a4750) with pdu=0x200016ee6b70 00:28:52.904 [2024-11-20 07:43:10.965391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.965407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.972733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee5a90 00:28:52.904 [2024-11-20 07:43:10.973827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.973843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.981148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee0630 00:28:52.904 [2024-11-20 07:43:10.982244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.982260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.989573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef1430 00:28:52.904 [2024-11-20 07:43:10.990655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.990671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:10.997977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efb8b8 00:28:52.904 [2024-11-20 07:43:10.999068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:10.999084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:11.006442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efa3a0 00:28:52.904 [2024-11-20 07:43:11.007521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:11.007537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:11.014914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eed0b0 00:28:52.904 [2024-11-20 07:43:11.015975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:11.015992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:11.023309] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee1f80 00:28:52.904 [2024-11-20 07:43:11.024370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:11.024386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:11.031733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ede038 00:28:52.904 [2024-11-20 07:43:11.032806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:11.032822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:11.040152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeff18 00:28:52.904 [2024-11-20 07:43:11.041252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:11.041268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:11.048576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eea680 00:28:52.904 [2024-11-20 07:43:11.049662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:11.049677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.904 [2024-11-20 07:43:11.056992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee4de8 00:28:52.904 [2024-11-20 07:43:11.058086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.904 [2024-11-20 07:43:11.058104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.905 [2024-11-20 07:43:11.065395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee7c50 00:28:52.905 [2024-11-20 07:43:11.066492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.905 [2024-11-20 07:43:11.066508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:52.905 [2024-11-20 07:43:11.073063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee27f0 00:28:52.905 [2024-11-20 07:43:11.074435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.905 [2024-11-20 07:43:11.074451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:52.905 [2024-11-20 07:43:11.080840] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef3a28 00:28:52.905 [2024-11-20 07:43:11.081560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.905 [2024-11-20 07:43:11.081575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:52.905 [2024-11-20 07:43:11.089410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef8e88 00:28:52.905 [2024-11-20 07:43:11.090098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.905 [2024-11-20 07:43:11.090114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:52.905 [2024-11-20 07:43:11.097843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efdeb0 00:28:52.905 [2024-11-20 07:43:11.098567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.905 [2024-11-20 07:43:11.098583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:52.905 [2024-11-20 07:43:11.106265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ef2948 00:28:52.905 [2024-11-20 07:43:11.107004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.905 [2024-11-20 07:43:11.107020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:53.166 [2024-11-20 07:43:11.114665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eea248 00:28:53.166 [2024-11-20 07:43:11.115396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.166 [2024-11-20 07:43:11.115412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:53.166 [2024-11-20 07:43:11.123070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee01f8 00:28:53.166 [2024-11-20 07:43:11.123755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.166 [2024-11-20 07:43:11.123771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:53.166 [2024-11-20 07:43:11.131469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeb760 00:28:53.166 [2024-11-20 07:43:11.132210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.166 [2024-11-20 07:43:11.132226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:53.166 
[2024-11-20 07:43:11.139899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee3060 00:28:53.166 [2024-11-20 07:43:11.140581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.166 [2024-11-20 07:43:11.140597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:53.166 [2024-11-20 07:43:11.148320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee4140 00:28:53.166 [2024-11-20 07:43:11.149048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.166 [2024-11-20 07:43:11.149064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:53.166 [2024-11-20 07:43:11.156725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eed920 00:28:53.166 [2024-11-20 07:43:11.157467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.166 [2024-11-20 07:43:11.157483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:53.166 [2024-11-20 07:43:11.165140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016eeea00 00:28:53.166 [2024-11-20 07:43:11.165871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.166 [2024-11-20 07:43:11.165886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:53.166 [2024-11-20 07:43:11.173544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee9e10 00:28:53.166 [2024-11-20 07:43:11.174236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.166 [2024-11-20 07:43:11.174251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:53.166 [2024-11-20 07:43:11.181956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee49b0 00:28:53.166 [2024-11-20 07:43:11.182551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.166 [2024-11-20 07:43:11.182566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:53.166 [2024-11-20 07:43:11.190376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016ee0ea0 00:28:53.166 [2024-11-20 07:43:11.191125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.166 [2024-11-20 07:43:11.191140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 
dnr:0
00:28:53.166 [2024-11-20 07:43:11.198813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efe720
00:28:53.166 [2024-11-20 07:43:11.199552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:53.166 [2024-11-20 07:43:11.199568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:28:53.166 30261.00 IOPS, 118.21 MiB/s [2024-11-20T06:43:11.376Z] [2024-11-20 07:43:11.207641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4750) with pdu=0x200016efb480
00:28:53.166 [2024-11-20 07:43:11.208279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:53.166 [2024-11-20 07:43:11.208294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:53.166
00:28:53.166 Latency(us)
00:28:53.166 [2024-11-20T06:43:11.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:53.166 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:53.166 nvme0n1 : 2.00 30275.37 118.26 0.00 0.00 4221.74 2034.35 9994.24
00:28:53.166 [2024-11-20T06:43:11.376Z] ===================================================================================================================
00:28:53.166 [2024-11-20T06:43:11.376Z] Total : 30275.37 118.26 0.00 0.00 4221.74 2034.35 9994.24
00:28:53.166 {
00:28:53.166   "results": [
00:28:53.166     {
00:28:53.166       "job": "nvme0n1",
00:28:53.166       "core_mask": "0x2",
00:28:53.166       "workload": "randwrite",
00:28:53.166       "status": "finished",
00:28:53.166       "queue_depth": 128,
00:28:53.166       "io_size": 4096,
00:28:53.166       "runtime": 2.004864,
00:28:53.166       "iops": 30275.370299431783,
00:28:53.166       "mibps": 118.2631652321554,
00:28:53.166       "io_failed": 0,
00:28:53.166       "io_timeout": 0,
00:28:53.166       "avg_latency_us": 4221.738184454183,
00:28:53.166       "min_latency_us": 2034.3466666666666,
00:28:53.166       "max_latency_us": 9994.24
00:28:53.166     }
00:28:53.166   ],
00:28:53.166   "core_count": 1
00:28:53.166 }
00:28:53.166 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:53.166 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:53.166 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:53.166 | .driver_specific
00:28:53.166 | .nvme_error
00:28:53.166 | .status_code
00:28:53.166 | .command_transient_transport_error'
00:28:53.166 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 238 > 0 ))
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3576271
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3576271 ']'
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3576271
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3576271
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3576271'
00:28:53.427 killing process with pid 3576271
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3576271
00:28:53.427 Received shutdown signal, test time was about 2.000000 seconds
00:28:53.427
00:28:53.427 Latency(us)
00:28:53.427 [2024-11-20T06:43:11.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:53.427 [2024-11-20T06:43:11.637Z] ===================================================================================================================
00:28:53.427 [2024-11-20T06:43:11.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3576271
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3576959
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3576959 /var/tmp/bperf.sock
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3576959 ']'
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:53.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
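[Editor's note] The get_transient_errcount step traced above is the actual pass/fail check of this digest-error case: with --nvme-error-stat enabled, bdev_get_iostat reports per-status-code NVMe error counters, and the test demands that the injected CRC32C corruption surfaced as at least one COMMAND TRANSIENT TRANSPORT ERROR completion (238 were counted here). A minimal bash sketch of that check, reconstructed from the digest.sh trace above (the rpc.py path, socket, and jq filter are taken verbatim from the trace; the helper name matches the script's own):

get_transient_errcount() {
    # bdev_get_iostat over the bperf RPC socket returns JSON; with
    # --nvme-error-stat set, NVMe error counters appear under
    # driver_specific.nvme_error in the reply.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# Fail the test unless the digest corruption actually produced transient errors.
(( $(get_transient_errcount nvme0n1) > 0 ))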
00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:53.427 07:43:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:53.427 [2024-11-20 07:43:11.626372] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:28:53.427 [2024-11-20 07:43:11.626429] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3576959 ] 00:28:53.427 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:53.427 Zero copy mechanism will not be used. 00:28:53.688 [2024-11-20 07:43:11.712431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.688 [2024-11-20 07:43:11.741811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.260 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:54.260 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:54.260 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:54.260 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:54.520 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:54.521 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.521 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:54.521 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.521 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.521 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.781 nvme0n1 00:28:54.781 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:54.781 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.781 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:54.781 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.781 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:54.781 07:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
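[Editor's note] The trace above is the setup half of the randwrite/131072/16 error case: NVMe error statistics are switched on and bdev retries made unlimited, any previous crc32c injection is cleared, and the controller is attached with TCP data digest (--ddgst) enabled so every payload carries a CRC32C the target will verify; the script then re-arms injection in 'corrupt' mode (-o crc32c -t corrupt -i 32, per the trace) and kicks off perform_tests. Condensed into the underlying rpc.py calls, a sketch with every flag exactly as traced (only the two shell variables are illustrative):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Keep per-status-code NVMe error counters and retry failed I/O without limit.
"$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Start clean: no crc32c error injection while the controller attaches.
"$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
# Attach over TCP with data digest on; the RPC prints the new bdev name (nvme0n1).
"$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Re-arm injection: corrupt crc32c results (-i 32 as in the trace), so WRITE data
# digests go out wrong and complete as COMMAND TRANSIENT TRANSPORT ERROR.
"$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32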
00:28:54.781 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:54.781 Zero copy mechanism will not be used. 00:28:54.781 Running I/O for 2 seconds... 00:28:54.781 [2024-11-20 07:43:12.944057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:54.781 [2024-11-20 07:43:12.944387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.781 [2024-11-20 07:43:12.944413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:54.781 [2024-11-20 07:43:12.954586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:54.781 [2024-11-20 07:43:12.954885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.781 [2024-11-20 07:43:12.954903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:54.781 [2024-11-20 07:43:12.962125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:54.781 [2024-11-20 07:43:12.962452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.781 [2024-11-20 07:43:12.962470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:54.781 [2024-11-20 07:43:12.970682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:54.781 [2024-11-20 07:43:12.970726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.781 [2024-11-20 07:43:12.970742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:54.781 [2024-11-20 07:43:12.978480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:54.781 [2024-11-20 07:43:12.978548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.781 [2024-11-20 07:43:12.978564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.042 [2024-11-20 07:43:12.986915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.042 [2024-11-20 07:43:12.986975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.042 [2024-11-20 07:43:12.986990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.042 [2024-11-20 07:43:12.995200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.042 [2024-11-20 07:43:12.995265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:55.043 [2024-11-20 07:43:12.995284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.043 [2024-11-20 07:43:13.003657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.043 [2024-11-20 07:43:13.003722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.043 [2024-11-20 07:43:13.003738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.043 [2024-11-20 07:43:13.011540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.043 [2024-11-20 07:43:13.011738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.043 [2024-11-20 07:43:13.011758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.043 [2024-11-20 07:43:13.017200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.043 [2024-11-20 07:43:13.017371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.043 [2024-11-20 07:43:13.017387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.043 [2024-11-20 07:43:13.026043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.043 [2024-11-20 07:43:13.026092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.043 [2024-11-20 07:43:13.026107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.043 [2024-11-20 07:43:13.034066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.043 [2024-11-20 07:43:13.034351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.043 [2024-11-20 07:43:13.034368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.043 [2024-11-20 07:43:13.041164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.043 [2024-11-20 07:43:13.041318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.043 [2024-11-20 07:43:13.041334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.043 [2024-11-20 07:43:13.051930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.043 [2024-11-20 07:43:13.052266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24256 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.043 [2024-11-20 07:43:13.052282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:28:55.043 [2024-11-20 07:43:13.058310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8
00:28:55.043 [2024-11-20 07:43:13.058607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.043 [2024-11-20 07:43:13.058624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
[... repeated cycles of the same three entries elided (07:43:13.066 through 07:43:13.739): a tcp.c:2233:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8", an nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* WRITE (sqid:1, cid:4 or cid:5, nsid:1, varying lba, len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), and an nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0016/0036/0056/0076 ...]
00:28:55.571 [2024-11-20 07:43:13.742130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8
00:28:55.571 [2024-11-20 07:43:13.742190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:55.571 [2024-11-20 07:43:13.742205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:55.571 [2024-11-20 07:43:13.746754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8
00:28:55.571 [2024-11-20 07:43:13.746799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.571 [2024-11-20 07:43:13.746815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.571 [2024-11-20 07:43:13.752615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.571 [2024-11-20 07:43:13.752689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.571 [2024-11-20 07:43:13.752704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.571 [2024-11-20 07:43:13.757053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.571 [2024-11-20 07:43:13.757341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.571 [2024-11-20 07:43:13.757357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.571 [2024-11-20 07:43:13.767341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.571 [2024-11-20 07:43:13.767681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.571 [2024-11-20 07:43:13.767697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.776739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.776982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.776997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.782238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.782320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.782334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.785751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.785857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.785872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.789610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.789664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.789679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.792790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.792869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.792884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.796845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.796894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.796909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.803620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.803667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.803685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.807364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.807411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.807426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.810480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.810524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.810539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.813891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.813958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.813974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.816984] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.817033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.817048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.820243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.820287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.820302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.823390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.823441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.823455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.826443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.826508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.826523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.830733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.830806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.830821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.833681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.833755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.833771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.836931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.837016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.837031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.844283] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.844352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.844367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.847399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.847481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.847496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.850562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.850615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.850630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.853579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.853642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.853657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.834 [2024-11-20 07:43:13.856421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.834 [2024-11-20 07:43:13.856484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.834 [2024-11-20 07:43:13.856498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.859506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.859572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.859587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.867031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.867111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.867126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.835 
[2024-11-20 07:43:13.869952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.870005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.870020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.873528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.873639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.873654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.878496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.878539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.878555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.883008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.883057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.883072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.886194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.886235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.886250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.889687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.889734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.889756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.892641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.892708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.892722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 
p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.896417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.896494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.896509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.899475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.899531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.899550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.903146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.903207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.903222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.909316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.909604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.909620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.914956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.915016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.915031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.919257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.919488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.919503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.925809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.926053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.926069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.930151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.930223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.930237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.935834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.936061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.936076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.835 6075.00 IOPS, 759.38 MiB/s [2024-11-20T06:43:14.045Z] [2024-11-20 07:43:13.944333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.944584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.944600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.954113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.954368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.954383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.964204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.964452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.964468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.972202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.972296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.972310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.979222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.979275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.979290] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.983020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.983088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.983103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.987003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.987055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.987069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.992512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.992557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.992572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.835 [2024-11-20 07:43:13.996100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.835 [2024-11-20 07:43:13.996165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.835 [2024-11-20 07:43:13.996180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.836 [2024-11-20 07:43:13.999362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.836 [2024-11-20 07:43:13.999415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.836 [2024-11-20 07:43:13.999430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.836 [2024-11-20 07:43:14.003633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.836 [2024-11-20 07:43:14.003697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.836 [2024-11-20 07:43:14.003712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.836 [2024-11-20 07:43:14.008392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.836 [2024-11-20 07:43:14.008447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.836 [2024-11-20 
07:43:14.008462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.836 [2024-11-20 07:43:14.012038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.836 [2024-11-20 07:43:14.012084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.836 [2024-11-20 07:43:14.012099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:55.836 [2024-11-20 07:43:14.015700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.836 [2024-11-20 07:43:14.015749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.836 [2024-11-20 07:43:14.015764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:55.836 [2024-11-20 07:43:14.019927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.836 [2024-11-20 07:43:14.020177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.836 [2024-11-20 07:43:14.020191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:55.836 [2024-11-20 07:43:14.028094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.836 [2024-11-20 07:43:14.028140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.836 [2024-11-20 07:43:14.028155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:55.836 [2024-11-20 07:43:14.032335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:55.836 [2024-11-20 07:43:14.032396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.836 [2024-11-20 07:43:14.032411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:56.097 [2024-11-20 07:43:14.038473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.097 [2024-11-20 07:43:14.038549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.097 [2024-11-20 07:43:14.038564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:56.097 [2024-11-20 07:43:14.042559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.097 [2024-11-20 07:43:14.042604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:56.097 [2024-11-20 07:43:14.042622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:56.097 [2024-11-20 07:43:14.046218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.097 [2024-11-20 07:43:14.046270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.097 [2024-11-20 07:43:14.046286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:56.097 [2024-11-20 07:43:14.050924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.097 [2024-11-20 07:43:14.051016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.097 [2024-11-20 07:43:14.051031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:56.097 [2024-11-20 07:43:14.054592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.097 [2024-11-20 07:43:14.054643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.097 [2024-11-20 07:43:14.054658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:56.097 [2024-11-20 07:43:14.057956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.097 [2024-11-20 07:43:14.058004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.097 [2024-11-20 07:43:14.058020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:56.097 [2024-11-20 07:43:14.061770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.097 [2024-11-20 07:43:14.061886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.097 [2024-11-20 07:43:14.061900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:56.097 [2024-11-20 07:43:14.066412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.097 [2024-11-20 07:43:14.066505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.097 [2024-11-20 07:43:14.066520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:56.097 [2024-11-20 07:43:14.074561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.097 [2024-11-20 07:43:14.074687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.097 [2024-11-20 07:43:14.074702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:56.097 [2024-11-20 07:43:14.084270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.097 [2024-11-20 07:43:14.084497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.097 [2024-11-20 07:43:14.084512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:56.097 [2024-11-20 07:43:14.095169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.097 [2024-11-20 07:43:14.095463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.095480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.106106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.106419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.106435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.115674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.115921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.115936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.125795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.126129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.126144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.132955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.133005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.133020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.136223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.136277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.136293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.139215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.139259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.139274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.142509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.142718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.142732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.147203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.147268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.147283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.150686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.150740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.150761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.153783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.153835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.153850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.156786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.156862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.156877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.162006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.162070] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.162085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.167428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.167483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.167498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.172190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.172248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.172263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.175619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.175948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.175963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.181659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.181737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.181757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.184949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.185023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.185042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.188796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.188848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.188863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.191995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.192080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.192094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.195096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.195152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.195167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.198333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.198398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.198412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.202910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.202983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.202998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.206331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.206457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.206472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.210799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.210879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.210895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.213845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.213967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.213982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.217884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 
07:43:14.218078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.218094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.227413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.227492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.227508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.230890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.230961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.230976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.233847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.233943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.233958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.239596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.239685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.239701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.246313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.246423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.246439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.254413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8 00:28:56.098 [2024-11-20 07:43:14.254735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.098 [2024-11-20 07:43:14.254756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:56.098 [2024-11-20 07:43:14.263221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with 
pdu=0x200016eff3c8
00:28:56.098 [2024-11-20 07:43:14.263468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:56.098 [2024-11-20 07:43:14.263483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
[roughly 115 further repetitions of the same pattern, 07:43:14.272 through 07:43:14.933, not reproduced here: each iteration logs a tcp.c:2233:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8", followed by the matching nvme_qpair.c WRITE command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1]
00:28:56.883 5586.50 IOPS, 698.31 MiB/s [2024-11-20T06:43:15.093Z]
[2024-11-20 07:43:14.943463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12a4a90) with pdu=0x200016eff3c8
00:28:56.883 [2024-11-20 07:43:14.943702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:56.883 [2024-11-20 07:43:14.943717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:28:56.883
00:28:56.883 Latency(us)
00:28:56.883 [2024-11-20T06:43:15.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:56.883 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:56.883 nvme0n1 : 2.01 5579.33 697.42 0.00 0.00 2861.97 1181.01 11523.41
00:28:56.883 [2024-11-20T06:43:15.093Z] ===================================================================================================================
00:28:56.883 [2024-11-20T06:43:15.093Z] Total : 5579.33 697.42 0.00 0.00 2861.97 1181.01 11523.41
00:28:56.883 {
00:28:56.883   "results": [
00:28:56.883     {
00:28:56.883       "job": "nvme0n1",
00:28:56.883       "core_mask": "0x2",
00:28:56.883       "workload": "randwrite",
00:28:56.883       "status": "finished",
00:28:56.883       "queue_depth": 16,
00:28:56.883       "io_size": 131072,
00:28:56.883       "runtime": 2.006156,
00:28:56.883       "iops": 5579.32683201107,
00:28:56.883       "mibps": 697.4158540013838,
00:28:56.883       "io_failed": 0,
00:28:56.883       "io_timeout": 0,
00:28:56.883       "avg_latency_us": 2861.9673200512225,
00:28:56.883       "min_latency_us": 1181.0133333333333,
00:28:56.883       "max_latency_us": 11523.413333333334
00:28:56.883     }
00:28:56.883   ],
00:28:56.883   "core_count": 1
00:28:56.883 }
00:28:56.883 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:56.883 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:56.883 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:56.883 | .driver_specific
00:28:56.883 | .nvme_error
00:28:56.883 | .status_code
00:28:56.883 | .command_transient_transport_error'
00:28:56.883 07:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 361 > 0 ))
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3576959
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3576959 ']'
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3576959
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3576959
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
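The trace above is how the harness decides the digest-error run passed: get_transient_errcount queries the bperf bdevperf instance for per-bdev I/O statistics over its RPC socket and extracts the transient transport error counter with jq, and host/digest.sh@71 then asserts the count is positive ((( 361 > 0 )) here). A minimal standalone sketch of the same check, assuming a bdevperf process is serving RPCs on /var/tmp/bperf.sock and exposes a bdev named nvme0n1 (paths taken from this log, not guarantees):

#!/usr/bin/env bash
# Sketch of the transient-error check traced above; socket path, bdev name
# and SPDK checkout location are copied from this log.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
bdev=nvme0n1

# bdev_get_iostat returns JSON; the NVMe bdev keeps per-status-code error
# counters under .driver_specific.nvme_error.status_code.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
    | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')

# Every injected data-digest failure should surface as a transient
# transport error, so a passing run sees a strictly positive count.
(( errcount > 0 ))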
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3576959'
killing process with pid 3576959
07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3576959
Received shutdown signal, test time was about 2.000000 seconds
00:28:57.144
00:28:57.144 Latency(us)
[2024-11-20T06:43:15.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-20T06:43:15.354Z] ===================================================================================================================
[2024-11-20T06:43:15.354Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3576959
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3574557
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3574557 ']'
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3574557
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:57.144 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3574557
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3574557'
killing process with pid 3574557
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3574557
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3574557
00:28:57.404
00:28:57.404 real 0m16.260s
00:28:57.404 user 0m32.258s
00:28:57.404 sys 0m3.542s
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:57.404 ************************************
00:28:57.404 END TEST nvmf_digest_error
00:28:57.404 ************************************
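The killprocess helper traced at common/autotest_common.sh@952-@976 above follows a fixed shape: refuse an empty pid, probe the process with kill -0, look up its command name so a sudo wrapper is not signalled directly, then SIGTERM it and reap it with wait. A simplified sketch of that shape (not the verbatim autotest_common.sh implementation, which also handles sudo parents and kill timeouts):

# Simplified sketch of the killprocess flow traced above.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                 # the '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 1    # is the process still alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [[ $process_name == sudo ]]; then
        return 1                              # don't SIGTERM the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap the child, as at @976 above
}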
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:57.404 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:57.404 rmmod nvme_tcp
00:28:57.404 rmmod nvme_fabrics
00:28:57.404 rmmod nvme_keyring
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3574557 ']'
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3574557
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 3574557 ']'
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 3574557
00:28:57.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3574557) - No such process
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 3574557 is not found'
00:28:57.665 Process with pid 3574557 is not found
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:57.665 07:43:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:59.580 07:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:59.580
00:28:59.580 real 0m43.261s
00:28:59.580 user 1m7.704s
00:28:59.580 sys 0m13.222s
00:28:59.580 07:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable
00:28:59.580 07:43:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:59.580 ************************************
00:28:59.580 END TEST nvmf_digest
00:28:59.580 ************************************
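The nvmftestfini/nvmf_tcp_fini teardown traced above is mechanical: unload the kernel NVMe-oF initiator modules, re-apply the firewall with every SPDK_NVMF-tagged rule stripped out, and flush the test interface and any leftover SPDK network namespace. A condensed standalone sketch of that sequence, with the interface name cvl_0_1 taken from this log and error handling simplified:

# Condensed sketch of the nvmf_tcp_fini teardown traced above; module and
# interface names come from this log, details simplified.
nvmf_tcp_fini_sketch() {
    # The unloads run under set +e in the real helper: modules may be gone.
    modprobe -v -r nvme-tcp || true       # prints "rmmod nvme_tcp" etc. as it unloads
    modprobe -v -r nvme-fabrics || true
    # Re-apply the firewall minus the SPDK_NVMF-tagged rules, keep the rest.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Drop the test addresses from the data NIC used by the run.
    ip -4 addr flush cvl_0_1
}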
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.841 ************************************ 00:28:59.841 START TEST nvmf_bdevperf 00:28:59.841 ************************************ 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:59.841 * Looking for test storage... 00:28:59.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:59.841 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:59.842 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.842 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:59.842 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.842 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.842 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.842 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:59.842 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.842 07:43:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.842 --rc genhtml_branch_coverage=1 00:28:59.842 --rc genhtml_function_coverage=1 00:28:59.842 --rc genhtml_legend=1 00:28:59.842 --rc geninfo_all_blocks=1 00:28:59.842 --rc geninfo_unexecuted_blocks=1 00:28:59.842 00:28:59.842 ' 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.842 --rc genhtml_branch_coverage=1 00:28:59.842 --rc genhtml_function_coverage=1 00:28:59.842 --rc genhtml_legend=1 00:28:59.842 --rc geninfo_all_blocks=1 00:28:59.842 --rc geninfo_unexecuted_blocks=1 00:28:59.842 00:28:59.842 ' 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.842 --rc genhtml_branch_coverage=1 00:28:59.842 --rc genhtml_function_coverage=1 00:28:59.842 --rc genhtml_legend=1 00:28:59.842 --rc geninfo_all_blocks=1 00:28:59.842 --rc geninfo_unexecuted_blocks=1 00:28:59.842 00:28:59.842 ' 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.842 --rc genhtml_branch_coverage=1 00:28:59.842 --rc genhtml_function_coverage=1 00:28:59.842 --rc genhtml_legend=1 00:28:59.842 --rc geninfo_all_blocks=1 00:28:59.842 --rc geninfo_unexecuted_blocks=1 00:28:59.842 00:28:59.842 ' 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.842 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.103 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.103 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.103 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.103 07:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:08.250 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:08.250 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
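For readers following the trace, the NIC discovery above (and the loop that continues just below) reduces to a sysfs glob: each E810 physical function lists its kernel netdev under /sys/bus/pci/devices/<bdf>/net/, and the harness only strips the path. A minimal standalone sketch in bash, using the PCI addresses reported in this run:

    for pci in 0000:31:00.0 0000:31:00.1; do
        # each PCI network function exposes its interface name(s) in sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done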
00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:08.250 Found net devices under 0000:31:00.0: cvl_0_0 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:08.250 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:08.251 Found net devices under 0000:31:00.1: cvl_0_1 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:08.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:29:08.251 00:29:08.251 --- 10.0.0.2 ping statistics --- 00:29:08.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.251 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:08.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms
00:29:08.251
00:29:08.251 --- 10.0.0.1 ping statistics ---
00:29:08.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:08.251 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3582010
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3582010
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3582010 ']'
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:08.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:08.251 07:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:08.251 [2024-11-20 07:43:25.725303] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
00:29:08.251 [2024-11-20 07:43:25.725370] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.251 [2024-11-20 07:43:25.824757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:08.251 [2024-11-20 07:43:25.876801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.251 [2024-11-20 07:43:25.876851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.251 [2024-11-20 07:43:25.876861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.251 [2024-11-20 07:43:25.876868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.251 [2024-11-20 07:43:25.876875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:08.251 [2024-11-20 07:43:25.879002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.251 [2024-11-20 07:43:25.879160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.251 [2024-11-20 07:43:25.879160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.513 [2024-11-20 07:43:26.612097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.513 Malloc0 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
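Stripped of the xtrace noise, the target bring-up in this stretch is a short RPC sequence; the namespace attach and TCP listener that complete it follow on the next lines. A hand-driven equivalent, assuming (as the harness does) that scripts/rpc.py talks to the default /var/tmp/spdk.sock inside the test namespace; every flag and name is the one reported in this run:

    # word splitting of $RPC is intentional in this sketch
    RPC="ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420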
00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.513 [2024-11-20 07:43:26.690030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.513 { 00:29:08.513 "params": { 00:29:08.513 "name": "Nvme$subsystem", 00:29:08.513 "trtype": "$TEST_TRANSPORT", 00:29:08.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.513 "adrfam": "ipv4", 00:29:08.513 "trsvcid": "$NVMF_PORT", 00:29:08.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.513 "hdgst": ${hdgst:-false}, 00:29:08.513 "ddgst": ${ddgst:-false} 00:29:08.513 }, 00:29:08.513 "method": "bdev_nvme_attach_controller" 00:29:08.513 } 00:29:08.513 EOF 00:29:08.513 )") 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:08.513 07:43:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:08.513 "params": { 00:29:08.513 "name": "Nvme1", 00:29:08.513 "trtype": "tcp", 00:29:08.513 "traddr": "10.0.0.2", 00:29:08.513 "adrfam": "ipv4", 00:29:08.513 "trsvcid": "4420", 00:29:08.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:08.513 "hdgst": false, 00:29:08.513 "ddgst": false 00:29:08.513 }, 00:29:08.513 "method": "bdev_nvme_attach_controller" 00:29:08.513 }' 00:29:08.774 [2024-11-20 07:43:26.759872] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:29:08.774 [2024-11-20 07:43:26.759945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582146 ]
00:29:08.774 [2024-11-20 07:43:26.855349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:08.774 [2024-11-20 07:43:26.908165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:09.036 Running I/O for 1 seconds...
00:29:09.978 9433.00 IOPS, 36.85 MiB/s
00:29:09.979
00:29:09.979 Latency(us)
00:29:09.979 [2024-11-20T06:43:28.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.979 [2024-11-20T06:43:28.189Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:09.979 Verification LBA range: start 0x0 length 0x4000
00:29:09.979 Nvme1n1 : 1.01 9501.49 37.12 0.00 0.00 13397.46 1460.91 13707.95
00:29:09.979 [2024-11-20T06:43:28.189Z] ===================================================================================================================
00:29:09.979 [2024-11-20T06:43:28.189Z] Total : 9501.49 37.12 0.00 0.00 13397.46 1460.91 13707.95
00:29:10.240 07:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3582391
00:29:10.240 07:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:29:10.240 07:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:10.240 07:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:10.240 07:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:29:10.240 07:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:29:10.240 07:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:29:10.240 07:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:29:10.240 {
00:29:10.240 "params": {
00:29:10.240 "name": "Nvme$subsystem",
00:29:10.240 "trtype": "$TEST_TRANSPORT",
00:29:10.240 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:10.240 "adrfam": "ipv4",
00:29:10.240 "trsvcid": "$NVMF_PORT",
00:29:10.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:10.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:10.240 "hdgst": ${hdgst:-false},
00:29:10.240 "ddgst": ${ddgst:-false}
00:29:10.240 },
00:29:10.240 "method": "bdev_nvme_attach_controller"
00:29:10.240 }
00:29:10.240 EOF
00:29:10.240 )")
00:29:10.240 07:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:29:10.240 07:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
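The fragment assembled above is what bdevperf reads over the anonymous pipe (--json /dev/fd/63); the fully resolved parameters are printed a few lines below. To replay the step by hand, a minimal sketch: the outer "subsystems"/"bdev" wrapper is an assumption about how gen_nvmf_target_json frames the fragment, the file path is arbitrary, and every parameter value is taken from this run:

    cat > /tmp/bdevperf_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # run from the spdk checkout, mirroring the invocation traced above
    ./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f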
00:29:10.240 07:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:10.240 07:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:10.240 "params": { 00:29:10.240 "name": "Nvme1", 00:29:10.240 "trtype": "tcp", 00:29:10.240 "traddr": "10.0.0.2", 00:29:10.240 "adrfam": "ipv4", 00:29:10.240 "trsvcid": "4420", 00:29:10.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:10.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:10.240 "hdgst": false, 00:29:10.240 "ddgst": false 00:29:10.240 }, 00:29:10.240 "method": "bdev_nvme_attach_controller" 00:29:10.240 }' 00:29:10.240 [2024-11-20 07:43:28.300527] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:29:10.240 [2024-11-20 07:43:28.300581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582391 ] 00:29:10.240 [2024-11-20 07:43:28.389366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.240 [2024-11-20 07:43:28.424493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.501 Running I/O for 15 seconds... 00:29:12.387 10254.00 IOPS, 40.05 MiB/s [2024-11-20T06:43:31.542Z] 10409.50 IOPS, 40.66 MiB/s [2024-11-20T06:43:31.542Z] 07:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3582010 00:29:13.332 07:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:13.332 [2024-11-20 07:43:31.273889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.332 [2024-11-20 07:43:31.273931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.332 [2024-11-20 07:43:31.273953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.332 [2024-11-20 07:43:31.273961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.332 [2024-11-20 07:43:31.273971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.332 [2024-11-20 07:43:31.273979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.332 [2024-11-20 07:43:31.273989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.332 [2024-11-20 07:43:31.274002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.332 [2024-11-20 07:43:31.274015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.332 [2024-11-20 07:43:31.274024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.332 [2024-11-20 07:43:31.274034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.332 [2024-11-20 
07:43:31.274043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: the same two-line pattern repeats for every request still in flight, a 'READ sqid:1 cid:<n> nsid:1 lba:<n> len:8' command print followed by its 'ABORTED - SQ DELETION (00/08)' completion, covering lba 91704 through 92176 in steps of 8 as the host fails back the I/O stranded by the kill -9 above]
READ sqid:1 cid:56 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92264 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:13.334 [2024-11-20 07:43:31.275632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275807] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.334 [2024-11-20 07:43:31.275817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.334 [2024-11-20 07:43:31.275825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.275834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.275842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.275851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.275858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.275868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.275875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.275885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.275892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.275902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.275909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.275918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.275926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.275936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.275945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.275955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.275962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.275971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.275979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.275989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.275996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.335 [2024-11-20 07:43:31.276317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11de550 is same with the state(6) to be set 00:29:13.335 [2024-11-20 07:43:31.276336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:13.335 [2024-11-20 07:43:31.276343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:13.335 [2024-11-20 07:43:31.276349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92672 len:8 PRP1 0x0 PRP2 0x0 00:29:13.335 [2024-11-20 07:43:31.276358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.335 [2024-11-20 07:43:31.276448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.335 [2024-11-20 07:43:31.276465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.335 [2024-11-20 07:43:31.276481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:13.335 [2024-11-20 07:43:31.276496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.335 [2024-11-20 07:43:31.276504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.335 [2024-11-20 07:43:31.280117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.335 [2024-11-20 07:43:31.280138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.335 [2024-11-20 07:43:31.281044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 07:43:31.281083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.335 [2024-11-20 07:43:31.281096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.335 [2024-11-20 07:43:31.281336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.335 [2024-11-20 07:43:31.281557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.335 [2024-11-20 07:43:31.281567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:29:13.335 [2024-11-20 07:43:31.281576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.336 [2024-11-20 07:43:31.281586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.336 [2024-11-20 07:43:31.294232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.336 [2024-11-20 07:43:31.294861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 07:43:31.294901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.336 [2024-11-20 07:43:31.294914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.336 [2024-11-20 07:43:31.295155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.336 [2024-11-20 07:43:31.295378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.336 [2024-11-20 07:43:31.295387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.336 [2024-11-20 07:43:31.295396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.336 [2024-11-20 07:43:31.295405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.336 [2024-11-20 07:43:31.308039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.336 [2024-11-20 07:43:31.308638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 07:43:31.308658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.336 [2024-11-20 07:43:31.308666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.336 [2024-11-20 07:43:31.308890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.336 [2024-11-20 07:43:31.309109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.336 [2024-11-20 07:43:31.309118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.336 [2024-11-20 07:43:31.309127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.336 [2024-11-20 07:43:31.309134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
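[editor's note: "(00/08)" in the completions above is the NVMe status printed as sct/sc -- status code type 0x0 (Generic Command Status) with status code 0x08 (Command Aborted due to SQ Deletion). Every I/O still queued when the submission queue is torn down for the reset is completed with this status, which is why the host prints one command/completion pair per outstanding READ. Below is a minimal illustrative decoder for that status pair, written for this note; it is a sketch covering only the codes seen in this log, not SPDK's own print table:

#include <stdio.h>

/* Map an (sct, sc) status pair, as printed in the "(00/08)" field above,
 * to a human-readable string. Only the Generic Command Status codes that
 * actually appear in this log are covered. */
static const char *nvme_status_str(unsigned int sct, unsigned int sc)
{
    if (sct == 0x0) { /* Generic Command Status */
        switch (sc) {
        case 0x00: return "SUCCESSFUL COMPLETION";
        case 0x08: return "ABORTED - SQ DELETION";
        }
    }
    return "UNKNOWN STATUS";
}

int main(void)
{
    /* Every completion in the dump above carries sct=0x00, sc=0x08. */
    printf("(%02x/%02x) => %s\n", 0x00, 0x08, nvme_status_str(0x00, 0x08));
    return 0;
}

end of editor's note]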
[... the identical reset cycle -- "resetting controller", connect() failed, errno = 111, sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420, Failed to flush tqpair=0x11cb280 (9): Bad file descriptor, Ctrlr is in error state, controller reinitialization failed, in failed state., Resetting controller failed. -- repeats with only the timestamps changing for 21 further attempts at 07:43:31.308039, .321939, .335886, .349762, .363567, .377441, .391259, .405148, .419020, .432826, .446720, .460643, .474524, .488434, .502316, .516284, .530200, .544158, .558010, .571792 and .585724 (console time 00:29:13.336 through 00:29:13.600) ...]
00:29:13.600 9280.33 IOPS, 36.25 MiB/s [2024-11-20T06:43:31.810Z]
[... three more identical attempts at 07:43:31.599543, .613343 and .627273 ...]
[2024-11-20 07:43:31.641246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.601
[2024-11-20 07:43:31.641978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601
[2024-11-20 07:43:31.642040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.601
[2024-11-20 07:43:31.642053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.601
[2024-11-20 07:43:31.642306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.601
[2024-11-20 07:43:31.642532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.601
[2024-11-20 07:43:31.642542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.601
[2024-11-20 07:43:31.642551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.601
[2024-11-20 07:43:31.642560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.601 [2024-11-20 07:43:31.655197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.601 [2024-11-20 07:43:31.655883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-11-20 07:43:31.655944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.601 [2024-11-20 07:43:31.655957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.601 [2024-11-20 07:43:31.656210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.601 [2024-11-20 07:43:31.656435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.601 [2024-11-20 07:43:31.656445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.601 [2024-11-20 07:43:31.656454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.601 [2024-11-20 07:43:31.656463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.601 [2024-11-20 07:43:31.669097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.601 [2024-11-20 07:43:31.669784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-11-20 07:43:31.669846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.601 [2024-11-20 07:43:31.669859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.601 [2024-11-20 07:43:31.670112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.601 [2024-11-20 07:43:31.670337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.601 [2024-11-20 07:43:31.670347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.601 [2024-11-20 07:43:31.670356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.601 [2024-11-20 07:43:31.670365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.601 [2024-11-20 07:43:31.683010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.601 [2024-11-20 07:43:31.683732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-11-20 07:43:31.683805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.601 [2024-11-20 07:43:31.683829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.601 [2024-11-20 07:43:31.684082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.601 [2024-11-20 07:43:31.684307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.601 [2024-11-20 07:43:31.684316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.601 [2024-11-20 07:43:31.684325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.601 [2024-11-20 07:43:31.684333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.601 [2024-11-20 07:43:31.696774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.601 [2024-11-20 07:43:31.697449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-11-20 07:43:31.697512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.601 [2024-11-20 07:43:31.697525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.601 [2024-11-20 07:43:31.697792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.601 [2024-11-20 07:43:31.698018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.601 [2024-11-20 07:43:31.698028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.601 [2024-11-20 07:43:31.698036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.601 [2024-11-20 07:43:31.698045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.601 [2024-11-20 07:43:31.710681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.601 [2024-11-20 07:43:31.711413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-11-20 07:43:31.711475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.601 [2024-11-20 07:43:31.711488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.601 [2024-11-20 07:43:31.711765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.601 [2024-11-20 07:43:31.711991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.601 [2024-11-20 07:43:31.712001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.601 [2024-11-20 07:43:31.712009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.601 [2024-11-20 07:43:31.712019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.601 [2024-11-20 07:43:31.724627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.601 [2024-11-20 07:43:31.725336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-11-20 07:43:31.725397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.601 [2024-11-20 07:43:31.725410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.601 [2024-11-20 07:43:31.725662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.601 [2024-11-20 07:43:31.725902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.601 [2024-11-20 07:43:31.725913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.601 [2024-11-20 07:43:31.725922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.601 [2024-11-20 07:43:31.725931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.601 [2024-11-20 07:43:31.738580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.601 [2024-11-20 07:43:31.739188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-11-20 07:43:31.739220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.601 [2024-11-20 07:43:31.739230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.601 [2024-11-20 07:43:31.739449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.601 [2024-11-20 07:43:31.739668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.601 [2024-11-20 07:43:31.739678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.602 [2024-11-20 07:43:31.739687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.602 [2024-11-20 07:43:31.739695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.602 [2024-11-20 07:43:31.752418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.602 [2024-11-20 07:43:31.753000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-11-20 07:43:31.753027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.602 [2024-11-20 07:43:31.753035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.602 [2024-11-20 07:43:31.753254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.602 [2024-11-20 07:43:31.753473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.602 [2024-11-20 07:43:31.753491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.602 [2024-11-20 07:43:31.753499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.602 [2024-11-20 07:43:31.753506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.602 [2024-11-20 07:43:31.766366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.602 [2024-11-20 07:43:31.767032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-11-20 07:43:31.767094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.602 [2024-11-20 07:43:31.767106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.602 [2024-11-20 07:43:31.767359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.602 [2024-11-20 07:43:31.767585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.602 [2024-11-20 07:43:31.767595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.602 [2024-11-20 07:43:31.767603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.602 [2024-11-20 07:43:31.767613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.602 [2024-11-20 07:43:31.780304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.602 [2024-11-20 07:43:31.781007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-11-20 07:43:31.781070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.602 [2024-11-20 07:43:31.781084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.602 [2024-11-20 07:43:31.781336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.602 [2024-11-20 07:43:31.781562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.602 [2024-11-20 07:43:31.781571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.602 [2024-11-20 07:43:31.781580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.602 [2024-11-20 07:43:31.781591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.602 [2024-11-20 07:43:31.794103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.602 [2024-11-20 07:43:31.794840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-11-20 07:43:31.794903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.602 [2024-11-20 07:43:31.794915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.602 [2024-11-20 07:43:31.795170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.602 [2024-11-20 07:43:31.795395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.602 [2024-11-20 07:43:31.795408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.602 [2024-11-20 07:43:31.795416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.602 [2024-11-20 07:43:31.795435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.865 [2024-11-20 07:43:31.807910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.865 [2024-11-20 07:43:31.808505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-20 07:43:31.808538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.865 [2024-11-20 07:43:31.808549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.865 [2024-11-20 07:43:31.808783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.865 [2024-11-20 07:43:31.809006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.865 [2024-11-20 07:43:31.809017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.865 [2024-11-20 07:43:31.809024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.865 [2024-11-20 07:43:31.809033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.865 [2024-11-20 07:43:31.821864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.865 [2024-11-20 07:43:31.822318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-20 07:43:31.822344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.865 [2024-11-20 07:43:31.822353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.865 [2024-11-20 07:43:31.822572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.865 [2024-11-20 07:43:31.822801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.865 [2024-11-20 07:43:31.822812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.865 [2024-11-20 07:43:31.822820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.865 [2024-11-20 07:43:31.822828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.865 [2024-11-20 07:43:31.835679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.865 [2024-11-20 07:43:31.836262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-20 07:43:31.836286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.865 [2024-11-20 07:43:31.836295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.865 [2024-11-20 07:43:31.836512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.865 [2024-11-20 07:43:31.836733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.865 [2024-11-20 07:43:31.836744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.865 [2024-11-20 07:43:31.836761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.865 [2024-11-20 07:43:31.836770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.865 [2024-11-20 07:43:31.849467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.865 [2024-11-20 07:43:31.850183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-20 07:43:31.850247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.865 [2024-11-20 07:43:31.850259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.865 [2024-11-20 07:43:31.850512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.865 [2024-11-20 07:43:31.850737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.865 [2024-11-20 07:43:31.850763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.865 [2024-11-20 07:43:31.850772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.865 [2024-11-20 07:43:31.850782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.865 [2024-11-20 07:43:31.863441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.865 [2024-11-20 07:43:31.864172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-20 07:43:31.864235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.865 [2024-11-20 07:43:31.864248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.865 [2024-11-20 07:43:31.864501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.865 [2024-11-20 07:43:31.864726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.865 [2024-11-20 07:43:31.864736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.865 [2024-11-20 07:43:31.864744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.865 [2024-11-20 07:43:31.864767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.865 [2024-11-20 07:43:31.877407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.865 [2024-11-20 07:43:31.877941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-20 07:43:31.877971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.865 [2024-11-20 07:43:31.877980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.865 [2024-11-20 07:43:31.878199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.865 [2024-11-20 07:43:31.878419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.865 [2024-11-20 07:43:31.878429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.865 [2024-11-20 07:43:31.878436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.865 [2024-11-20 07:43:31.878445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.865 [2024-11-20 07:43:31.891321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.865 [2024-11-20 07:43:31.891913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-20 07:43:31.891939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.865 [2024-11-20 07:43:31.891948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.865 [2024-11-20 07:43:31.892174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.865 [2024-11-20 07:43:31.892392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.865 [2024-11-20 07:43:31.892403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.865 [2024-11-20 07:43:31.892410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.865 [2024-11-20 07:43:31.892418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.866 [2024-11-20 07:43:31.905309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.866 [2024-11-20 07:43:31.905853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.866 [2024-11-20 07:43:31.905897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.866 [2024-11-20 07:43:31.905907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.866 [2024-11-20 07:43:31.906145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.866 [2024-11-20 07:43:31.906367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.866 [2024-11-20 07:43:31.906377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.866 [2024-11-20 07:43:31.906386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.866 [2024-11-20 07:43:31.906394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.866 [2024-11-20 07:43:31.919308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.866 [2024-11-20 07:43:31.920093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.866 [2024-11-20 07:43:31.920156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.866 [2024-11-20 07:43:31.920169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.866 [2024-11-20 07:43:31.920422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.866 [2024-11-20 07:43:31.920647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.866 [2024-11-20 07:43:31.920657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.866 [2024-11-20 07:43:31.920666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.866 [2024-11-20 07:43:31.920676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.866 [2024-11-20 07:43:31.933125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.866 [2024-11-20 07:43:31.933726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.866 [2024-11-20 07:43:31.933763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.866 [2024-11-20 07:43:31.933772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.866 [2024-11-20 07:43:31.933993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.866 [2024-11-20 07:43:31.934211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.866 [2024-11-20 07:43:31.934229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.866 [2024-11-20 07:43:31.934238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.866 [2024-11-20 07:43:31.934246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.866 [2024-11-20 07:43:31.947084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.866 [2024-11-20 07:43:31.947653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.866 [2024-11-20 07:43:31.947677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.866 [2024-11-20 07:43:31.947685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.866 [2024-11-20 07:43:31.947912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.866 [2024-11-20 07:43:31.948131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.866 [2024-11-20 07:43:31.948148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.866 [2024-11-20 07:43:31.948156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.866 [2024-11-20 07:43:31.948164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.866 [2024-11-20 07:43:31.961002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.866 [2024-11-20 07:43:31.961659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.866 [2024-11-20 07:43:31.961722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.866 [2024-11-20 07:43:31.961735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.866 [2024-11-20 07:43:31.962001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.866 [2024-11-20 07:43:31.962227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.866 [2024-11-20 07:43:31.962238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.866 [2024-11-20 07:43:31.962247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.866 [2024-11-20 07:43:31.962257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.866 [2024-11-20 07:43:31.974904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.866 [2024-11-20 07:43:31.975487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.866 [2024-11-20 07:43:31.975515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.866 [2024-11-20 07:43:31.975526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.866 [2024-11-20 07:43:31.975754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.866 [2024-11-20 07:43:31.975976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.866 [2024-11-20 07:43:31.975985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.866 [2024-11-20 07:43:31.975994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.866 [2024-11-20 07:43:31.976016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.866 [2024-11-20 07:43:31.988858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.866 [2024-11-20 07:43:31.989516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.866 [2024-11-20 07:43:31.989578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.866 [2024-11-20 07:43:31.989591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.866 [2024-11-20 07:43:31.989855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.866 [2024-11-20 07:43:31.990081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.866 [2024-11-20 07:43:31.990091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.866 [2024-11-20 07:43:31.990100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.866 [2024-11-20 07:43:31.990109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.866 [2024-11-20 07:43:32.002766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.866 [2024-11-20 07:43:32.003361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.866 [2024-11-20 07:43:32.003390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.866 [2024-11-20 07:43:32.003398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.866 [2024-11-20 07:43:32.003618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.866 [2024-11-20 07:43:32.003846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.866 [2024-11-20 07:43:32.003856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.866 [2024-11-20 07:43:32.003864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.866 [2024-11-20 07:43:32.003872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.866 [2024-11-20 07:43:32.016726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.866 [2024-11-20 07:43:32.017294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.866 [2024-11-20 07:43:32.017357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.866 [2024-11-20 07:43:32.017369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.866 [2024-11-20 07:43:32.017622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.866 [2024-11-20 07:43:32.017859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.866 [2024-11-20 07:43:32.017870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.866 [2024-11-20 07:43:32.017878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.866 [2024-11-20 07:43:32.017888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.866 [2024-11-20 07:43:32.030531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.866 [2024-11-20 07:43:32.031171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.866 [2024-11-20 07:43:32.031200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.866 [2024-11-20 07:43:32.031209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.866 [2024-11-20 07:43:32.031430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.866 [2024-11-20 07:43:32.031652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.866 [2024-11-20 07:43:32.031664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.867 [2024-11-20 07:43:32.031675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.867 [2024-11-20 07:43:32.031684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.867 [2024-11-20 07:43:32.044336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.867 [2024-11-20 07:43:32.044827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.867 [2024-11-20 07:43:32.044852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.867 [2024-11-20 07:43:32.044862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.867 [2024-11-20 07:43:32.045081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.867 [2024-11-20 07:43:32.045301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.867 [2024-11-20 07:43:32.045312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.867 [2024-11-20 07:43:32.045322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.867 [2024-11-20 07:43:32.045331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.867 [2024-11-20 07:43:32.058178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.867 [2024-11-20 07:43:32.058755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.867 [2024-11-20 07:43:32.058780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:13.867 [2024-11-20 07:43:32.058788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:13.867 [2024-11-20 07:43:32.059006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:13.867 [2024-11-20 07:43:32.059226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.867 [2024-11-20 07:43:32.059237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.867 [2024-11-20 07:43:32.059245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.867 [2024-11-20 07:43:32.059252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.130 [2024-11-20 07:43:32.072084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.130 [2024-11-20 07:43:32.072642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-11-20 07:43:32.072665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.130 [2024-11-20 07:43:32.072673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.130 [2024-11-20 07:43:32.072905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.130 [2024-11-20 07:43:32.073125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.130 [2024-11-20 07:43:32.073135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.130 [2024-11-20 07:43:32.073143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.130 [2024-11-20 07:43:32.073151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.130 [2024-11-20 07:43:32.085979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.130 [2024-11-20 07:43:32.086551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-11-20 07:43:32.086573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.130 [2024-11-20 07:43:32.086581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.130 [2024-11-20 07:43:32.086804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.130 [2024-11-20 07:43:32.087024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.130 [2024-11-20 07:43:32.087033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.130 [2024-11-20 07:43:32.087041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.130 [2024-11-20 07:43:32.087048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.130 [2024-11-20 07:43:32.099888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.130 [2024-11-20 07:43:32.100458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-11-20 07:43:32.100481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.130 [2024-11-20 07:43:32.100489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.130 [2024-11-20 07:43:32.100706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.130 [2024-11-20 07:43:32.100932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.130 [2024-11-20 07:43:32.100941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.130 [2024-11-20 07:43:32.100949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.130 [2024-11-20 07:43:32.100956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.130 [2024-11-20 07:43:32.113799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.130 [2024-11-20 07:43:32.114365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-11-20 07:43:32.114388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.130 [2024-11-20 07:43:32.114396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.130 [2024-11-20 07:43:32.114614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.130 [2024-11-20 07:43:32.114840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.130 [2024-11-20 07:43:32.114857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.130 [2024-11-20 07:43:32.114865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.130 [2024-11-20 07:43:32.114874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.130 [2024-11-20 07:43:32.127704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.130 [2024-11-20 07:43:32.128282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-11-20 07:43:32.128307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.130 [2024-11-20 07:43:32.128315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.130 [2024-11-20 07:43:32.128534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.130 [2024-11-20 07:43:32.128759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.130 [2024-11-20 07:43:32.128770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.130 [2024-11-20 07:43:32.128779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.130 [2024-11-20 07:43:32.128787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.130 [2024-11-20 07:43:32.141598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.130 [2024-11-20 07:43:32.142188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-11-20 07:43:32.142250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.130 [2024-11-20 07:43:32.142263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.130 [2024-11-20 07:43:32.142516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.130 [2024-11-20 07:43:32.142743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.130 [2024-11-20 07:43:32.142765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.130 [2024-11-20 07:43:32.142774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.130 [2024-11-20 07:43:32.142783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.130 [2024-11-20 07:43:32.155424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.130 [2024-11-20 07:43:32.156031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.130 [2024-11-20 07:43:32.156063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.130 [2024-11-20 07:43:32.156071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.130 [2024-11-20 07:43:32.156292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.130 [2024-11-20 07:43:32.156512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.130 [2024-11-20 07:43:32.156522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.130 [2024-11-20 07:43:32.156530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.130 [2024-11-20 07:43:32.156545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.130 [2024-11-20 07:43:32.169381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.130 [2024-11-20 07:43:32.169946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-20 07:43:32.169973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.131 [2024-11-20 07:43:32.169982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.131 [2024-11-20 07:43:32.170201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.131 [2024-11-20 07:43:32.170420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.131 [2024-11-20 07:43:32.170431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.131 [2024-11-20 07:43:32.170438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.131 [2024-11-20 07:43:32.170446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.131 [2024-11-20 07:43:32.183279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.131 [2024-11-20 07:43:32.183850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.131 [2024-11-20 07:43:32.183873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.131 [2024-11-20 07:43:32.183881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.131 [2024-11-20 07:43:32.184099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.131 [2024-11-20 07:43:32.184318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.131 [2024-11-20 07:43:32.184327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.131 [2024-11-20 07:43:32.184335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.131 [2024-11-20 07:43:32.184344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.131 [2024-11-20 07:43:32.197193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.131 [2024-11-20 07:43:32.197857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.131 [2024-11-20 07:43:32.197921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.131 [2024-11-20 07:43:32.197934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.131 [2024-11-20 07:43:32.198187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.131 [2024-11-20 07:43:32.198413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.131 [2024-11-20 07:43:32.198423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.131 [2024-11-20 07:43:32.198432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.131 [2024-11-20 07:43:32.198441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.131 [2024-11-20 07:43:32.210175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.131 [2024-11-20 07:43:32.210763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.131 [2024-11-20 07:43:32.210787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.131 [2024-11-20 07:43:32.210794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.131 [2024-11-20 07:43:32.210948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.131 [2024-11-20 07:43:32.211100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.131 [2024-11-20 07:43:32.211107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.131 [2024-11-20 07:43:32.211113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.131 [2024-11-20 07:43:32.211119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.131 [2024-11-20 07:43:32.222908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.131 [2024-11-20 07:43:32.223431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.131 [2024-11-20 07:43:32.223451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.131 [2024-11-20 07:43:32.223457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.131 [2024-11-20 07:43:32.223609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.131 [2024-11-20 07:43:32.223767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.131 [2024-11-20 07:43:32.223774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.131 [2024-11-20 07:43:32.223780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.131 [2024-11-20 07:43:32.223785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.131 [2024-11-20 07:43:32.235556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.131 [2024-11-20 07:43:32.236078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.131 [2024-11-20 07:43:32.236127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.131 [2024-11-20 07:43:32.236136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.131 [2024-11-20 07:43:32.236312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.131 [2024-11-20 07:43:32.236468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.131 [2024-11-20 07:43:32.236476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.131 [2024-11-20 07:43:32.236481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.131 [2024-11-20 07:43:32.236488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.131 [2024-11-20 07:43:32.248276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.131 [2024-11-20 07:43:32.248955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.131 [2024-11-20 07:43:32.249000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.131 [2024-11-20 07:43:32.249009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.131 [2024-11-20 07:43:32.249189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.131 [2024-11-20 07:43:32.249343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.131 [2024-11-20 07:43:32.249351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.131 [2024-11-20 07:43:32.249357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.131 [2024-11-20 07:43:32.249364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.131 [2024-11-20 07:43:32.261011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.131 [2024-11-20 07:43:32.261591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.131 [2024-11-20 07:43:32.261634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.131 [2024-11-20 07:43:32.261643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.131 [2024-11-20 07:43:32.261830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.131 [2024-11-20 07:43:32.261986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.131 [2024-11-20 07:43:32.261993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.131 [2024-11-20 07:43:32.261998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.131 [2024-11-20 07:43:32.262005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.132 [2024-11-20 07:43:32.273670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.132 [2024-11-20 07:43:32.274239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-20 07:43:32.274280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.132 [2024-11-20 07:43:32.274289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.132 [2024-11-20 07:43:32.274463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.132 [2024-11-20 07:43:32.274617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.132 [2024-11-20 07:43:32.274624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.132 [2024-11-20 07:43:32.274629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.132 [2024-11-20 07:43:32.274635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.132 [2024-11-20 07:43:32.286300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.132 [2024-11-20 07:43:32.286803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-20 07:43:32.286843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.132 [2024-11-20 07:43:32.286854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.132 [2024-11-20 07:43:32.287028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.132 [2024-11-20 07:43:32.287183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.132 [2024-11-20 07:43:32.287198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.132 [2024-11-20 07:43:32.287205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.132 [2024-11-20 07:43:32.287213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.132 [2024-11-20 07:43:32.299094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.132 [2024-11-20 07:43:32.299585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-20 07:43:32.299603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.132 [2024-11-20 07:43:32.299610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.132 [2024-11-20 07:43:32.299769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.132 [2024-11-20 07:43:32.299922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.132 [2024-11-20 07:43:32.299930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.132 [2024-11-20 07:43:32.299936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.132 [2024-11-20 07:43:32.299942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.132 [2024-11-20 07:43:32.311696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.132 [2024-11-20 07:43:32.312344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-20 07:43:32.312379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.132 [2024-11-20 07:43:32.312388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.132 [2024-11-20 07:43:32.312555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.132 [2024-11-20 07:43:32.312708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.132 [2024-11-20 07:43:32.312715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.132 [2024-11-20 07:43:32.312721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.132 [2024-11-20 07:43:32.312727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.132 [2024-11-20 07:43:32.324351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.132 [2024-11-20 07:43:32.324858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-20 07:43:32.324893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.132 [2024-11-20 07:43:32.324902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.132 [2024-11-20 07:43:32.325073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.132 [2024-11-20 07:43:32.325226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.132 [2024-11-20 07:43:32.325232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.132 [2024-11-20 07:43:32.325238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.132 [2024-11-20 07:43:32.325248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.394 [2024-11-20 07:43:32.337010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.394 [2024-11-20 07:43:32.337516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.394 [2024-11-20 07:43:32.337531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.394 [2024-11-20 07:43:32.337537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.394 [2024-11-20 07:43:32.337686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.394 [2024-11-20 07:43:32.337841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.394 [2024-11-20 07:43:32.337847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.394 [2024-11-20 07:43:32.337853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.394 [2024-11-20 07:43:32.337857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.394 [2024-11-20 07:43:32.349589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.394 [2024-11-20 07:43:32.350081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.394 [2024-11-20 07:43:32.350095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.394 [2024-11-20 07:43:32.350101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.394 [2024-11-20 07:43:32.350250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.394 [2024-11-20 07:43:32.350399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.394 [2024-11-20 07:43:32.350405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.394 [2024-11-20 07:43:32.350410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.394 [2024-11-20 07:43:32.350415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.394 [2024-11-20 07:43:32.362296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.394 [2024-11-20 07:43:32.362849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.394 [2024-11-20 07:43:32.362880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.394 [2024-11-20 07:43:32.362889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.394 [2024-11-20 07:43:32.363057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.394 [2024-11-20 07:43:32.363209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.394 [2024-11-20 07:43:32.363216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.394 [2024-11-20 07:43:32.363222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.394 [2024-11-20 07:43:32.363228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.394 [2024-11-20 07:43:32.374989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.394 [2024-11-20 07:43:32.375479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.394 [2024-11-20 07:43:32.375493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.394 [2024-11-20 07:43:32.375499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.394 [2024-11-20 07:43:32.375648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.394 [2024-11-20 07:43:32.375801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.394 [2024-11-20 07:43:32.375807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.394 [2024-11-20 07:43:32.375812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.394 [2024-11-20 07:43:32.375817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.394 [2024-11-20 07:43:32.387700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.394 [2024-11-20 07:43:32.388253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.394 [2024-11-20 07:43:32.388283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.394 [2024-11-20 07:43:32.388292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.394 [2024-11-20 07:43:32.388457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.394 [2024-11-20 07:43:32.388609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.394 [2024-11-20 07:43:32.388616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.394 [2024-11-20 07:43:32.388621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.394 [2024-11-20 07:43:32.388628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.394 [2024-11-20 07:43:32.400389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.394 [2024-11-20 07:43:32.400792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.394 [2024-11-20 07:43:32.400807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.394 [2024-11-20 07:43:32.400813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.394 [2024-11-20 07:43:32.400963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.395 [2024-11-20 07:43:32.401112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.395 [2024-11-20 07:43:32.401118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.395 [2024-11-20 07:43:32.401123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.395 [2024-11-20 07:43:32.401128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.395 [2024-11-20 07:43:32.413030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.395 [2024-11-20 07:43:32.413602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.395 [2024-11-20 07:43:32.413633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.395 [2024-11-20 07:43:32.413642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.395 [2024-11-20 07:43:32.413818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.395 [2024-11-20 07:43:32.413972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.395 [2024-11-20 07:43:32.413978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.395 [2024-11-20 07:43:32.413984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.395 [2024-11-20 07:43:32.413990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.395 [2024-11-20 07:43:32.425734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.395 [2024-11-20 07:43:32.426186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.395 [2024-11-20 07:43:32.426201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.395 [2024-11-20 07:43:32.426207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.395 [2024-11-20 07:43:32.426356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.395 [2024-11-20 07:43:32.426505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.395 [2024-11-20 07:43:32.426511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.395 [2024-11-20 07:43:32.426516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.395 [2024-11-20 07:43:32.426521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.395 [2024-11-20 07:43:32.438406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.395 [2024-11-20 07:43:32.439083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.395 [2024-11-20 07:43:32.439113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.395 [2024-11-20 07:43:32.439122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.395 [2024-11-20 07:43:32.439286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.395 [2024-11-20 07:43:32.439438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.395 [2024-11-20 07:43:32.439445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.395 [2024-11-20 07:43:32.439451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.395 [2024-11-20 07:43:32.439457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.395 [2024-11-20 07:43:32.451069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.395 [2024-11-20 07:43:32.451516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.395 [2024-11-20 07:43:32.451530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.395 [2024-11-20 07:43:32.451536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.395 [2024-11-20 07:43:32.451685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.395 [2024-11-20 07:43:32.451839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.395 [2024-11-20 07:43:32.451849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.395 [2024-11-20 07:43:32.451854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.395 [2024-11-20 07:43:32.451859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.395 [2024-11-20 07:43:32.463742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.395 [2024-11-20 07:43:32.464318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.395 [2024-11-20 07:43:32.464348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.395 [2024-11-20 07:43:32.464356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.395 [2024-11-20 07:43:32.464521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.395 [2024-11-20 07:43:32.464674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.395 [2024-11-20 07:43:32.464681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.395 [2024-11-20 07:43:32.464686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.395 [2024-11-20 07:43:32.464692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.395 [2024-11-20 07:43:32.476443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.395 [2024-11-20 07:43:32.476919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.395 [2024-11-20 07:43:32.476934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.395 [2024-11-20 07:43:32.476940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.395 [2024-11-20 07:43:32.477089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.395 [2024-11-20 07:43:32.477238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.395 [2024-11-20 07:43:32.477244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.395 [2024-11-20 07:43:32.477248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.395 [2024-11-20 07:43:32.477254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.395 [2024-11-20 07:43:32.489127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.395 [2024-11-20 07:43:32.489575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.395 [2024-11-20 07:43:32.489587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.395 [2024-11-20 07:43:32.489593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.395 [2024-11-20 07:43:32.489741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.395 [2024-11-20 07:43:32.489896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.395 [2024-11-20 07:43:32.489902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.395 [2024-11-20 07:43:32.489907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.395 [2024-11-20 07:43:32.489915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.395 [2024-11-20 07:43:32.501798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.395 [2024-11-20 07:43:32.502275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.395 [2024-11-20 07:43:32.502288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.395 [2024-11-20 07:43:32.502293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.395 [2024-11-20 07:43:32.502442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.395 [2024-11-20 07:43:32.502590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.395 [2024-11-20 07:43:32.502596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.395 [2024-11-20 07:43:32.502601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.395 [2024-11-20 07:43:32.502606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.395 [2024-11-20 07:43:32.514499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.395 [2024-11-20 07:43:32.514885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.395 [2024-11-20 07:43:32.514898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.395 [2024-11-20 07:43:32.514903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.395 [2024-11-20 07:43:32.515052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.395 [2024-11-20 07:43:32.515201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.395 [2024-11-20 07:43:32.515206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.395 [2024-11-20 07:43:32.515211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.395 [2024-11-20 07:43:32.515216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.395 [2024-11-20 07:43:32.527095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.395 [2024-11-20 07:43:32.527572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.395 [2024-11-20 07:43:32.527584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.395 [2024-11-20 07:43:32.527589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.395 [2024-11-20 07:43:32.527738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.396 [2024-11-20 07:43:32.527891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.396 [2024-11-20 07:43:32.527897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.396 [2024-11-20 07:43:32.527902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.396 [2024-11-20 07:43:32.527907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.396 [2024-11-20 07:43:32.539790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.396 [2024-11-20 07:43:32.540153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.396 [2024-11-20 07:43:32.540166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.396 [2024-11-20 07:43:32.540172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.396 [2024-11-20 07:43:32.540321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.396 [2024-11-20 07:43:32.540470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.396 [2024-11-20 07:43:32.540476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.396 [2024-11-20 07:43:32.540481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.396 [2024-11-20 07:43:32.540486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.396 [2024-11-20 07:43:32.552369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.396 [2024-11-20 07:43:32.552948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.396 [2024-11-20 07:43:32.552979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.396 [2024-11-20 07:43:32.552988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.396 [2024-11-20 07:43:32.553153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.396 [2024-11-20 07:43:32.553305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.396 [2024-11-20 07:43:32.553312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.396 [2024-11-20 07:43:32.553319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.396 [2024-11-20 07:43:32.553325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.396 [2024-11-20 07:43:32.565076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.396 [2024-11-20 07:43:32.565604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.396 [2024-11-20 07:43:32.565618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.396 [2024-11-20 07:43:32.565624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.396 [2024-11-20 07:43:32.565779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.396 [2024-11-20 07:43:32.565929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.396 [2024-11-20 07:43:32.565935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.396 [2024-11-20 07:43:32.565940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.396 [2024-11-20 07:43:32.565945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.396 [2024-11-20 07:43:32.577663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.396 [2024-11-20 07:43:32.578215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.396 [2024-11-20 07:43:32.578245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.396 [2024-11-20 07:43:32.578254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.396 [2024-11-20 07:43:32.578423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.396 [2024-11-20 07:43:32.578575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.396 [2024-11-20 07:43:32.578581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.396 [2024-11-20 07:43:32.578587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.396 [2024-11-20 07:43:32.578592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.396 6960.25 IOPS, 27.19 MiB/s [2024-11-20T06:43:32.606Z] [2024-11-20 07:43:32.591495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.396 [2024-11-20 07:43:32.592061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.396 [2024-11-20 07:43:32.592092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.396 [2024-11-20 07:43:32.592100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.396 [2024-11-20 07:43:32.592265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.396 [2024-11-20 07:43:32.592425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.396 [2024-11-20 07:43:32.592432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.396 [2024-11-20 07:43:32.592438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.396 [2024-11-20 07:43:32.592443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.658 [2024-11-20 07:43:32.604197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.658 [2024-11-20 07:43:32.604772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.658 [2024-11-20 07:43:32.604802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.658 [2024-11-20 07:43:32.604811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.658 [2024-11-20 07:43:32.604978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.658 [2024-11-20 07:43:32.605130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.658 [2024-11-20 07:43:32.605137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.658 [2024-11-20 07:43:32.605142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.658 [2024-11-20 07:43:32.605148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.658 [2024-11-20 07:43:32.616919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.658 [2024-11-20 07:43:32.617489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.658 [2024-11-20 07:43:32.617519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.658 [2024-11-20 07:43:32.617528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.658 [2024-11-20 07:43:32.617692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.658 [2024-11-20 07:43:32.617856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.658 [2024-11-20 07:43:32.617863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.658 [2024-11-20 07:43:32.617869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.658 [2024-11-20 07:43:32.617875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.658 [2024-11-20 07:43:32.629625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.658 [2024-11-20 07:43:32.630174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.658 [2024-11-20 07:43:32.630205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.658 [2024-11-20 07:43:32.630214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.658 [2024-11-20 07:43:32.630379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.658 [2024-11-20 07:43:32.630531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.658 [2024-11-20 07:43:32.630538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.658 [2024-11-20 07:43:32.630544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.659 [2024-11-20 07:43:32.630551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.659 [2024-11-20 07:43:32.642306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.659 [2024-11-20 07:43:32.642792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.659 [2024-11-20 07:43:32.642807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.659 [2024-11-20 07:43:32.642813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.659 [2024-11-20 07:43:32.642963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.659 [2024-11-20 07:43:32.643112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.659 [2024-11-20 07:43:32.643118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.659 [2024-11-20 07:43:32.643123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.659 [2024-11-20 07:43:32.643129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.659 [2024-11-20 07:43:32.655015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.659 [2024-11-20 07:43:32.655551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.659 [2024-11-20 07:43:32.655582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.659 [2024-11-20 07:43:32.655591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.659 [2024-11-20 07:43:32.655762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.659 [2024-11-20 07:43:32.655916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.659 [2024-11-20 07:43:32.655922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.659 [2024-11-20 07:43:32.655928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.659 [2024-11-20 07:43:32.655937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.659 [2024-11-20 07:43:32.667682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.659 [2024-11-20 07:43:32.668120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.659 [2024-11-20 07:43:32.668150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.659 [2024-11-20 07:43:32.668159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.659 [2024-11-20 07:43:32.668326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.659 [2024-11-20 07:43:32.668478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.659 [2024-11-20 07:43:32.668484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.659 [2024-11-20 07:43:32.668489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.659 [2024-11-20 07:43:32.668495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.659 [2024-11-20 07:43:32.680391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.659 [2024-11-20 07:43:32.680885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.659 [2024-11-20 07:43:32.680915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.659 [2024-11-20 07:43:32.680924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.659 [2024-11-20 07:43:32.681091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.659 [2024-11-20 07:43:32.681243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.659 [2024-11-20 07:43:32.681249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.659 [2024-11-20 07:43:32.681255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.659 [2024-11-20 07:43:32.681261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.659 [2024-11-20 07:43:32.693012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.659 [2024-11-20 07:43:32.693597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.659 [2024-11-20 07:43:32.693627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.659 [2024-11-20 07:43:32.693635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.659 [2024-11-20 07:43:32.693807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.659 [2024-11-20 07:43:32.693959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.659 [2024-11-20 07:43:32.693966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.659 [2024-11-20 07:43:32.693971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.659 [2024-11-20 07:43:32.693977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.659 [2024-11-20 07:43:32.705718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:14.659 [2024-11-20 07:43:32.706296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.659 [2024-11-20 07:43:32.706326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:14.659 [2024-11-20 07:43:32.706334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:14.659 [2024-11-20 07:43:32.706499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:14.659 [2024-11-20 07:43:32.706651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:14.659 [2024-11-20 07:43:32.706658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:14.659 [2024-11-20 07:43:32.706663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:14.659 [2024-11-20 07:43:32.706669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:14.659 [2024-11-20 07:43:32.718429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.659 [2024-11-20 07:43:32.718963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-11-20 07:43:32.718994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.659 [2024-11-20 07:43:32.719002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.659 [2024-11-20 07:43:32.719167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.659 [2024-11-20 07:43:32.719319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.659 [2024-11-20 07:43:32.719326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.659 [2024-11-20 07:43:32.719331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.659 [2024-11-20 07:43:32.719337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.659 [2024-11-20 07:43:32.731079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.659 [2024-11-20 07:43:32.731655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-11-20 07:43:32.731685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.659 [2024-11-20 07:43:32.731693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.659 [2024-11-20 07:43:32.731865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.659 [2024-11-20 07:43:32.732017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.659 [2024-11-20 07:43:32.732023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.659 [2024-11-20 07:43:32.732029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.659 [2024-11-20 07:43:32.732035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.659 [2024-11-20 07:43:32.743775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.659 [2024-11-20 07:43:32.744353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.659 [2024-11-20 07:43:32.744383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.659 [2024-11-20 07:43:32.744395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.659 [2024-11-20 07:43:32.744560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.660 [2024-11-20 07:43:32.744713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.660 [2024-11-20 07:43:32.744719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.660 [2024-11-20 07:43:32.744724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.660 [2024-11-20 07:43:32.744730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.660 [2024-11-20 07:43:32.756479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.660 [2024-11-20 07:43:32.757049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.660 [2024-11-20 07:43:32.757080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.660 [2024-11-20 07:43:32.757089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.660 [2024-11-20 07:43:32.757253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.660 [2024-11-20 07:43:32.757405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.660 [2024-11-20 07:43:32.757412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.660 [2024-11-20 07:43:32.757417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.660 [2024-11-20 07:43:32.757423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.660 [2024-11-20 07:43:32.769176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.660 [2024-11-20 07:43:32.769758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.660 [2024-11-20 07:43:32.769788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.660 [2024-11-20 07:43:32.769797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.660 [2024-11-20 07:43:32.769963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.660 [2024-11-20 07:43:32.770115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.660 [2024-11-20 07:43:32.770122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.660 [2024-11-20 07:43:32.770127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.660 [2024-11-20 07:43:32.770132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.660 [2024-11-20 07:43:32.781884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.660 [2024-11-20 07:43:32.782454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.660 [2024-11-20 07:43:32.782484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.660 [2024-11-20 07:43:32.782492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.660 [2024-11-20 07:43:32.782657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.660 [2024-11-20 07:43:32.782817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.660 [2024-11-20 07:43:32.782828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.660 [2024-11-20 07:43:32.782834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.660 [2024-11-20 07:43:32.782840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.660 [2024-11-20 07:43:32.794582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.660 [2024-11-20 07:43:32.795161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.660 [2024-11-20 07:43:32.795191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.660 [2024-11-20 07:43:32.795199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.660 [2024-11-20 07:43:32.795364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.660 [2024-11-20 07:43:32.795516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.660 [2024-11-20 07:43:32.795522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.660 [2024-11-20 07:43:32.795528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.660 [2024-11-20 07:43:32.795533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.660 [2024-11-20 07:43:32.807285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.660 [2024-11-20 07:43:32.807846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.660 [2024-11-20 07:43:32.807878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.660 [2024-11-20 07:43:32.807887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.660 [2024-11-20 07:43:32.808054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.660 [2024-11-20 07:43:32.808206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.660 [2024-11-20 07:43:32.808213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.660 [2024-11-20 07:43:32.808218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.660 [2024-11-20 07:43:32.808224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.660 [2024-11-20 07:43:32.819986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.660 [2024-11-20 07:43:32.820528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.660 [2024-11-20 07:43:32.820558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.660 [2024-11-20 07:43:32.820566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.660 [2024-11-20 07:43:32.820731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.660 [2024-11-20 07:43:32.820889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.660 [2024-11-20 07:43:32.820897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.660 [2024-11-20 07:43:32.820902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.660 [2024-11-20 07:43:32.820911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.660 [2024-11-20 07:43:32.832689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.660 [2024-11-20 07:43:32.833270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.660 [2024-11-20 07:43:32.833300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.660 [2024-11-20 07:43:32.833309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.660 [2024-11-20 07:43:32.833474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.660 [2024-11-20 07:43:32.833625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.660 [2024-11-20 07:43:32.833632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.660 [2024-11-20 07:43:32.833637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.660 [2024-11-20 07:43:32.833643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.660 [2024-11-20 07:43:32.845391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.660 [2024-11-20 07:43:32.845864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.660 [2024-11-20 07:43:32.845894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.660 [2024-11-20 07:43:32.845903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.660 [2024-11-20 07:43:32.846070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.660 [2024-11-20 07:43:32.846222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.661 [2024-11-20 07:43:32.846229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.661 [2024-11-20 07:43:32.846234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.661 [2024-11-20 07:43:32.846240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.661 [2024-11-20 07:43:32.857990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.661 [2024-11-20 07:43:32.858562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.661 [2024-11-20 07:43:32.858592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.661 [2024-11-20 07:43:32.858601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.661 [2024-11-20 07:43:32.858772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.661 [2024-11-20 07:43:32.858925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.661 [2024-11-20 07:43:32.858931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.661 [2024-11-20 07:43:32.858937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.661 [2024-11-20 07:43:32.858943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.923 [2024-11-20 07:43:32.870690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.923 [2024-11-20 07:43:32.871188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.923 [2024-11-20 07:43:32.871203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.923 [2024-11-20 07:43:32.871208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.923 [2024-11-20 07:43:32.871357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.923 [2024-11-20 07:43:32.871506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.923 [2024-11-20 07:43:32.871512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.923 [2024-11-20 07:43:32.871517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.923 [2024-11-20 07:43:32.871522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.923 [2024-11-20 07:43:32.883393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.923 [2024-11-20 07:43:32.883855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.923 [2024-11-20 07:43:32.883868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.923 [2024-11-20 07:43:32.883874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.923 [2024-11-20 07:43:32.884022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.923 [2024-11-20 07:43:32.884171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.923 [2024-11-20 07:43:32.884177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.923 [2024-11-20 07:43:32.884182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.923 [2024-11-20 07:43:32.884187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.923 [2024-11-20 07:43:32.896072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.923 [2024-11-20 07:43:32.896639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.923 [2024-11-20 07:43:32.896669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.923 [2024-11-20 07:43:32.896678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.923 [2024-11-20 07:43:32.896849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.923 [2024-11-20 07:43:32.897002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.923 [2024-11-20 07:43:32.897009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.923 [2024-11-20 07:43:32.897014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.923 [2024-11-20 07:43:32.897020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.923 [2024-11-20 07:43:32.908766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.923 [2024-11-20 07:43:32.909349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.923 [2024-11-20 07:43:32.909380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.923 [2024-11-20 07:43:32.909391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.923 [2024-11-20 07:43:32.909556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.923 [2024-11-20 07:43:32.909708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.923 [2024-11-20 07:43:32.909715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.923 [2024-11-20 07:43:32.909720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.923 [2024-11-20 07:43:32.909726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.923 [2024-11-20 07:43:32.921477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.923 [2024-11-20 07:43:32.922042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.923 [2024-11-20 07:43:32.922072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.923 [2024-11-20 07:43:32.922081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.923 [2024-11-20 07:43:32.922245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.923 [2024-11-20 07:43:32.922397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.923 [2024-11-20 07:43:32.922403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.923 [2024-11-20 07:43:32.922409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.923 [2024-11-20 07:43:32.922415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.923 [2024-11-20 07:43:32.934167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.923 [2024-11-20 07:43:32.934671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.923 [2024-11-20 07:43:32.934701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.923 [2024-11-20 07:43:32.934710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.923 [2024-11-20 07:43:32.934884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.923 [2024-11-20 07:43:32.935037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.923 [2024-11-20 07:43:32.935043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.923 [2024-11-20 07:43:32.935049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.923 [2024-11-20 07:43:32.935055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.923 [2024-11-20 07:43:32.946798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.923 [2024-11-20 07:43:32.947378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.923 [2024-11-20 07:43:32.947408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.923 [2024-11-20 07:43:32.947416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.923 [2024-11-20 07:43:32.947584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.923 [2024-11-20 07:43:32.947736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.923 [2024-11-20 07:43:32.947753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.923 [2024-11-20 07:43:32.947759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.923 [2024-11-20 07:43:32.947765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.923 [2024-11-20 07:43:32.959402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.923 [2024-11-20 07:43:32.959862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.923 [2024-11-20 07:43:32.959892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.923 [2024-11-20 07:43:32.959900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.923 [2024-11-20 07:43:32.960067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.923 [2024-11-20 07:43:32.960218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.923 [2024-11-20 07:43:32.960225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.923 [2024-11-20 07:43:32.960230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.923 [2024-11-20 07:43:32.960236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.923 [2024-11-20 07:43:32.971984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.923 [2024-11-20 07:43:32.972555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.923 [2024-11-20 07:43:32.972585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.923 [2024-11-20 07:43:32.972594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.923 [2024-11-20 07:43:32.972765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.924 [2024-11-20 07:43:32.972918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.924 [2024-11-20 07:43:32.972924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.924 [2024-11-20 07:43:32.972930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.924 [2024-11-20 07:43:32.972936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.924 [2024-11-20 07:43:32.984676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.924 [2024-11-20 07:43:32.985242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.924 [2024-11-20 07:43:32.985272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.924 [2024-11-20 07:43:32.985281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.924 [2024-11-20 07:43:32.985445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.924 [2024-11-20 07:43:32.985598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.924 [2024-11-20 07:43:32.985604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.924 [2024-11-20 07:43:32.985610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.924 [2024-11-20 07:43:32.985619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.924 [2024-11-20 07:43:32.997378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.924 [2024-11-20 07:43:32.997980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.924 [2024-11-20 07:43:32.998010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.924 [2024-11-20 07:43:32.998019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.924 [2024-11-20 07:43:32.998186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.924 [2024-11-20 07:43:32.998338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.924 [2024-11-20 07:43:32.998345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.924 [2024-11-20 07:43:32.998350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.924 [2024-11-20 07:43:32.998356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.924 [2024-11-20 07:43:33.009967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.924 [2024-11-20 07:43:33.010538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.924 [2024-11-20 07:43:33.010568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.924 [2024-11-20 07:43:33.010577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.924 [2024-11-20 07:43:33.010741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.924 [2024-11-20 07:43:33.010902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.924 [2024-11-20 07:43:33.010908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.924 [2024-11-20 07:43:33.010914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.924 [2024-11-20 07:43:33.010921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.924 [2024-11-20 07:43:33.022705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.924 [2024-11-20 07:43:33.023280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.924 [2024-11-20 07:43:33.023310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.924 [2024-11-20 07:43:33.023319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.924 [2024-11-20 07:43:33.023483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.924 [2024-11-20 07:43:33.023635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.924 [2024-11-20 07:43:33.023642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.924 [2024-11-20 07:43:33.023647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.924 [2024-11-20 07:43:33.023653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.924 [2024-11-20 07:43:33.035403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.924 [2024-11-20 07:43:33.035876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.924 [2024-11-20 07:43:33.035911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.924 [2024-11-20 07:43:33.035919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.924 [2024-11-20 07:43:33.036086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.924 [2024-11-20 07:43:33.036238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.924 [2024-11-20 07:43:33.036244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.924 [2024-11-20 07:43:33.036250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.924 [2024-11-20 07:43:33.036255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.924 [2024-11-20 07:43:33.048006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.924 [2024-11-20 07:43:33.048484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.924 [2024-11-20 07:43:33.048514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.924 [2024-11-20 07:43:33.048523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.924 [2024-11-20 07:43:33.048688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.924 [2024-11-20 07:43:33.048850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.924 [2024-11-20 07:43:33.048858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.924 [2024-11-20 07:43:33.048865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.924 [2024-11-20 07:43:33.048871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.924 [2024-11-20 07:43:33.060620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.924 [2024-11-20 07:43:33.061177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.924 [2024-11-20 07:43:33.061207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.924 [2024-11-20 07:43:33.061215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.924 [2024-11-20 07:43:33.061380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.924 [2024-11-20 07:43:33.061532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.924 [2024-11-20 07:43:33.061538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.924 [2024-11-20 07:43:33.061543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.924 [2024-11-20 07:43:33.061549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.924 [2024-11-20 07:43:33.073287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.924 [2024-11-20 07:43:33.073828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.924 [2024-11-20 07:43:33.073858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.924 [2024-11-20 07:43:33.073870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.924 [2024-11-20 07:43:33.074037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.924 [2024-11-20 07:43:33.074190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.924 [2024-11-20 07:43:33.074196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.924 [2024-11-20 07:43:33.074202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.924 [2024-11-20 07:43:33.074208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.924 [2024-11-20 07:43:33.085955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.924 [2024-11-20 07:43:33.086492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.924 [2024-11-20 07:43:33.086522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.924 [2024-11-20 07:43:33.086531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.924 [2024-11-20 07:43:33.086695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.924 [2024-11-20 07:43:33.086854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.924 [2024-11-20 07:43:33.086862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.924 [2024-11-20 07:43:33.086868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.924 [2024-11-20 07:43:33.086874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.924 [2024-11-20 07:43:33.098624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.924 [2024-11-20 07:43:33.099182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.924 [2024-11-20 07:43:33.099212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.924 [2024-11-20 07:43:33.099221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.925 [2024-11-20 07:43:33.099385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.925 [2024-11-20 07:43:33.099537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.925 [2024-11-20 07:43:33.099544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.925 [2024-11-20 07:43:33.099550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.925 [2024-11-20 07:43:33.099555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.925 [2024-11-20 07:43:33.111302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.925 [2024-11-20 07:43:33.111796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.925 [2024-11-20 07:43:33.111812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.925 [2024-11-20 07:43:33.111818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.925 [2024-11-20 07:43:33.111967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.925 [2024-11-20 07:43:33.112116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.925 [2024-11-20 07:43:33.112126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.925 [2024-11-20 07:43:33.112131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.925 [2024-11-20 07:43:33.112135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.925 [2024-11-20 07:43:33.123880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.925 [2024-11-20 07:43:33.124467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.925 [2024-11-20 07:43:33.124497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:14.925 [2024-11-20 07:43:33.124506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:14.925 [2024-11-20 07:43:33.124670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:14.925 [2024-11-20 07:43:33.124830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.925 [2024-11-20 07:43:33.124837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.925 [2024-11-20 07:43:33.124843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.925 [2024-11-20 07:43:33.124849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.187 [2024-11-20 07:43:33.136591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.187 [2024-11-20 07:43:33.137183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.187 [2024-11-20 07:43:33.137213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.187 [2024-11-20 07:43:33.137222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.187 [2024-11-20 07:43:33.137386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.187 [2024-11-20 07:43:33.137538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.187 [2024-11-20 07:43:33.137545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.187 [2024-11-20 07:43:33.137550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.187 [2024-11-20 07:43:33.137556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.187 [2024-11-20 07:43:33.149302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.187 [2024-11-20 07:43:33.149852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.187 [2024-11-20 07:43:33.149882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.187 [2024-11-20 07:43:33.149891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.187 [2024-11-20 07:43:33.150057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.187 [2024-11-20 07:43:33.150209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.187 [2024-11-20 07:43:33.150216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.187 [2024-11-20 07:43:33.150222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.187 [2024-11-20 07:43:33.150231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.187 [2024-11-20 07:43:33.161973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.187 [2024-11-20 07:43:33.162558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.187 [2024-11-20 07:43:33.162589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.187 [2024-11-20 07:43:33.162597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.187 [2024-11-20 07:43:33.162769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.187 [2024-11-20 07:43:33.162922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.187 [2024-11-20 07:43:33.162929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.187 [2024-11-20 07:43:33.162934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.187 [2024-11-20 07:43:33.162940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.187 [2024-11-20 07:43:33.174683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.187 [2024-11-20 07:43:33.175266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.187 [2024-11-20 07:43:33.175297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.187 [2024-11-20 07:43:33.175306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.187 [2024-11-20 07:43:33.175470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.187 [2024-11-20 07:43:33.175623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.187 [2024-11-20 07:43:33.175629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.187 [2024-11-20 07:43:33.175634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.187 [2024-11-20 07:43:33.175640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.187 [2024-11-20 07:43:33.187392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.187 [2024-11-20 07:43:33.187829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.187 [2024-11-20 07:43:33.187859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.187 [2024-11-20 07:43:33.187868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.187 [2024-11-20 07:43:33.188035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.187 [2024-11-20 07:43:33.188187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.187 [2024-11-20 07:43:33.188194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.187 [2024-11-20 07:43:33.188199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.187 [2024-11-20 07:43:33.188205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.187 [2024-11-20 07:43:33.200106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.187 [2024-11-20 07:43:33.200683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.187 [2024-11-20 07:43:33.200713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.187 [2024-11-20 07:43:33.200722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.187 [2024-11-20 07:43:33.200894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.188 [2024-11-20 07:43:33.201047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.188 [2024-11-20 07:43:33.201053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.188 [2024-11-20 07:43:33.201059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.188 [2024-11-20 07:43:33.201065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.188 [2024-11-20 07:43:33.212818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.188 [2024-11-20 07:43:33.213366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.188 [2024-11-20 07:43:33.213396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.188 [2024-11-20 07:43:33.213405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.188 [2024-11-20 07:43:33.213569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.188 [2024-11-20 07:43:33.213722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.188 [2024-11-20 07:43:33.213728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.188 [2024-11-20 07:43:33.213734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.188 [2024-11-20 07:43:33.213740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.188 [2024-11-20 07:43:33.225497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.188 [2024-11-20 07:43:33.225976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.188 [2024-11-20 07:43:33.225991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.188 [2024-11-20 07:43:33.225997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.188 [2024-11-20 07:43:33.226146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.188 [2024-11-20 07:43:33.226295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.188 [2024-11-20 07:43:33.226301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.188 [2024-11-20 07:43:33.226306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.188 [2024-11-20 07:43:33.226311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.188 [2024-11-20 07:43:33.238187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.188 [2024-11-20 07:43:33.238667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.188 [2024-11-20 07:43:33.238680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.188 [2024-11-20 07:43:33.238685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.188 [2024-11-20 07:43:33.238846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.188 [2024-11-20 07:43:33.238996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.188 [2024-11-20 07:43:33.239002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.188 [2024-11-20 07:43:33.239007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.188 [2024-11-20 07:43:33.239012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.188 [2024-11-20 07:43:33.250885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.188 [2024-11-20 07:43:33.251473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.188 [2024-11-20 07:43:33.251503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.188 [2024-11-20 07:43:33.251511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.188 [2024-11-20 07:43:33.251676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.188 [2024-11-20 07:43:33.251835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.188 [2024-11-20 07:43:33.251842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.188 [2024-11-20 07:43:33.251848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.188 [2024-11-20 07:43:33.251854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.188 [2024-11-20 07:43:33.263581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.188 [2024-11-20 07:43:33.264162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.188 [2024-11-20 07:43:33.264192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.188 [2024-11-20 07:43:33.264201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.188 [2024-11-20 07:43:33.264366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.188 [2024-11-20 07:43:33.264518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.188 [2024-11-20 07:43:33.264525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.188 [2024-11-20 07:43:33.264530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.188 [2024-11-20 07:43:33.264536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.188 [2024-11-20 07:43:33.276280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.188 [2024-11-20 07:43:33.276840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.188 [2024-11-20 07:43:33.276871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.188 [2024-11-20 07:43:33.276880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.188 [2024-11-20 07:43:33.277045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.188 [2024-11-20 07:43:33.277197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.188 [2024-11-20 07:43:33.277209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.188 [2024-11-20 07:43:33.277215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.188 [2024-11-20 07:43:33.277220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.188 [2024-11-20 07:43:33.288972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.188 [2024-11-20 07:43:33.289468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.188 [2024-11-20 07:43:33.289483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.188 [2024-11-20 07:43:33.289489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.188 [2024-11-20 07:43:33.289638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.188 [2024-11-20 07:43:33.289795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.188 [2024-11-20 07:43:33.289801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.188 [2024-11-20 07:43:33.289807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.188 [2024-11-20 07:43:33.289812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.188 [2024-11-20 07:43:33.301554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.188 [2024-11-20 07:43:33.302095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.188 [2024-11-20 07:43:33.302125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.188 [2024-11-20 07:43:33.302134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.188 [2024-11-20 07:43:33.302299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.188 [2024-11-20 07:43:33.302451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.188 [2024-11-20 07:43:33.302458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.188 [2024-11-20 07:43:33.302464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.188 [2024-11-20 07:43:33.302471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.188 [2024-11-20 07:43:33.314224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.188 [2024-11-20 07:43:33.314824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.188 [2024-11-20 07:43:33.314855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.188 [2024-11-20 07:43:33.314864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.188 [2024-11-20 07:43:33.315031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.188 [2024-11-20 07:43:33.315184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.188 [2024-11-20 07:43:33.315190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.188 [2024-11-20 07:43:33.315195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.188 [2024-11-20 07:43:33.315204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.188 [2024-11-20 07:43:33.326828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.188 [2024-11-20 07:43:33.327414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.188 [2024-11-20 07:43:33.327444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.188 [2024-11-20 07:43:33.327453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.189 [2024-11-20 07:43:33.327617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.189 [2024-11-20 07:43:33.327775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.189 [2024-11-20 07:43:33.327783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.189 [2024-11-20 07:43:33.327788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.189 [2024-11-20 07:43:33.327794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.189 [2024-11-20 07:43:33.339471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.189 [2024-11-20 07:43:33.339851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.189 [2024-11-20 07:43:33.339867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.189 [2024-11-20 07:43:33.339873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.189 [2024-11-20 07:43:33.340023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.189 [2024-11-20 07:43:33.340171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.189 [2024-11-20 07:43:33.340178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.189 [2024-11-20 07:43:33.340183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.189 [2024-11-20 07:43:33.340188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.189 [2024-11-20 07:43:33.352064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.189 [2024-11-20 07:43:33.352559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.189 [2024-11-20 07:43:33.352572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.189 [2024-11-20 07:43:33.352577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.189 [2024-11-20 07:43:33.352725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.189 [2024-11-20 07:43:33.352879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.189 [2024-11-20 07:43:33.352886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.189 [2024-11-20 07:43:33.352891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.189 [2024-11-20 07:43:33.352896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.189 [2024-11-20 07:43:33.364777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.189 [2024-11-20 07:43:33.365363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.189 [2024-11-20 07:43:33.365393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.189 [2024-11-20 07:43:33.365402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.189 [2024-11-20 07:43:33.365567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.189 [2024-11-20 07:43:33.365719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.189 [2024-11-20 07:43:33.365726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.189 [2024-11-20 07:43:33.365732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.189 [2024-11-20 07:43:33.365738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.189 [2024-11-20 07:43:33.377494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.189 [2024-11-20 07:43:33.377882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.189 [2024-11-20 07:43:33.377913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.189 [2024-11-20 07:43:33.377923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.189 [2024-11-20 07:43:33.378088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.189 [2024-11-20 07:43:33.378240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.189 [2024-11-20 07:43:33.378247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.189 [2024-11-20 07:43:33.378253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.189 [2024-11-20 07:43:33.378259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.189 [2024-11-20 07:43:33.390158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.189 [2024-11-20 07:43:33.390523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.189 [2024-11-20 07:43:33.390537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.189 [2024-11-20 07:43:33.390543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.189 [2024-11-20 07:43:33.390692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.189 [2024-11-20 07:43:33.390846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.189 [2024-11-20 07:43:33.390853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.189 [2024-11-20 07:43:33.390858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.189 [2024-11-20 07:43:33.390863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.451 [2024-11-20 07:43:33.402751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.451 [2024-11-20 07:43:33.403292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.451 [2024-11-20 07:43:33.403322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.451 [2024-11-20 07:43:33.403330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.451 [2024-11-20 07:43:33.403499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.451 [2024-11-20 07:43:33.403651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.451 [2024-11-20 07:43:33.403658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.451 [2024-11-20 07:43:33.403663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.451 [2024-11-20 07:43:33.403669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.451 [2024-11-20 07:43:33.415427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.451 [2024-11-20 07:43:33.415987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.451 [2024-11-20 07:43:33.416017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.451 [2024-11-20 07:43:33.416026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.451 [2024-11-20 07:43:33.416191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.451 [2024-11-20 07:43:33.416344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.451 [2024-11-20 07:43:33.416350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.451 [2024-11-20 07:43:33.416356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.451 [2024-11-20 07:43:33.416361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.451 [2024-11-20 07:43:33.428122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.451 [2024-11-20 07:43:33.428668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.451 [2024-11-20 07:43:33.428698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.451 [2024-11-20 07:43:33.428707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.451 [2024-11-20 07:43:33.428880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.451 [2024-11-20 07:43:33.429033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.451 [2024-11-20 07:43:33.429040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.451 [2024-11-20 07:43:33.429045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.451 [2024-11-20 07:43:33.429051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.452 [2024-11-20 07:43:33.440795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.452 [2024-11-20 07:43:33.441367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.452 [2024-11-20 07:43:33.441397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.452 [2024-11-20 07:43:33.441406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.452 [2024-11-20 07:43:33.441571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.452 [2024-11-20 07:43:33.441723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.452 [2024-11-20 07:43:33.441734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.452 [2024-11-20 07:43:33.441739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.452 [2024-11-20 07:43:33.441752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.452 [2024-11-20 07:43:33.453496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.452 [2024-11-20 07:43:33.454073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.452 [2024-11-20 07:43:33.454103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.452 [2024-11-20 07:43:33.454111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.452 [2024-11-20 07:43:33.454276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.452 [2024-11-20 07:43:33.454428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.452 [2024-11-20 07:43:33.454434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.452 [2024-11-20 07:43:33.454440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.452 [2024-11-20 07:43:33.454446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.452 [2024-11-20 07:43:33.466193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.452 [2024-11-20 07:43:33.466765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.452 [2024-11-20 07:43:33.466794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.452 [2024-11-20 07:43:33.466803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.452 [2024-11-20 07:43:33.466970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.452 [2024-11-20 07:43:33.467122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.452 [2024-11-20 07:43:33.467128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.452 [2024-11-20 07:43:33.467133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.452 [2024-11-20 07:43:33.467139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.452 [2024-11-20 07:43:33.478888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.452 [2024-11-20 07:43:33.479502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.452 [2024-11-20 07:43:33.479532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.452 [2024-11-20 07:43:33.479540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.452 [2024-11-20 07:43:33.479705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.452 [2024-11-20 07:43:33.479866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.452 [2024-11-20 07:43:33.479873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.452 [2024-11-20 07:43:33.479879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.452 [2024-11-20 07:43:33.479888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.452 [2024-11-20 07:43:33.491494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.452 [2024-11-20 07:43:33.492058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.452 [2024-11-20 07:43:33.492088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.452 [2024-11-20 07:43:33.492097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.452 [2024-11-20 07:43:33.492261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.452 [2024-11-20 07:43:33.492414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.452 [2024-11-20 07:43:33.492420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.452 [2024-11-20 07:43:33.492426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.452 [2024-11-20 07:43:33.492431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.452 [2024-11-20 07:43:33.504186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.452 [2024-11-20 07:43:33.504775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.452 [2024-11-20 07:43:33.504805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.452 [2024-11-20 07:43:33.504814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.452 [2024-11-20 07:43:33.504981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.452 [2024-11-20 07:43:33.505133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.452 [2024-11-20 07:43:33.505140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.452 [2024-11-20 07:43:33.505145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.452 [2024-11-20 07:43:33.505151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.452 [2024-11-20 07:43:33.516767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.452 [2024-11-20 07:43:33.517314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.452 [2024-11-20 07:43:33.517344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.452 [2024-11-20 07:43:33.517352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.452 [2024-11-20 07:43:33.517517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.452 [2024-11-20 07:43:33.517669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.452 [2024-11-20 07:43:33.517675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.452 [2024-11-20 07:43:33.517681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.452 [2024-11-20 07:43:33.517687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.452 [2024-11-20 07:43:33.529442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.452 [2024-11-20 07:43:33.530024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.452 [2024-11-20 07:43:33.530054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.452 [2024-11-20 07:43:33.530063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.452 [2024-11-20 07:43:33.530230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.452 [2024-11-20 07:43:33.530382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.452 [2024-11-20 07:43:33.530389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.452 [2024-11-20 07:43:33.530394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.452 [2024-11-20 07:43:33.530400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.452 [2024-11-20 07:43:33.542154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.452 [2024-11-20 07:43:33.542634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.452 [2024-11-20 07:43:33.542649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.452 [2024-11-20 07:43:33.542655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.452 [2024-11-20 07:43:33.542809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.452 [2024-11-20 07:43:33.542958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.452 [2024-11-20 07:43:33.542964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.452 [2024-11-20 07:43:33.542969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.452 [2024-11-20 07:43:33.542974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.453 [2024-11-20 07:43:33.554849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.453 [2024-11-20 07:43:33.555435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.453 [2024-11-20 07:43:33.555466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.453 [2024-11-20 07:43:33.555475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.453 [2024-11-20 07:43:33.555642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.453 [2024-11-20 07:43:33.555801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.453 [2024-11-20 07:43:33.555815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.453 [2024-11-20 07:43:33.555820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.453 [2024-11-20 07:43:33.555826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.453 [2024-11-20 07:43:33.567439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.453 [2024-11-20 07:43:33.567857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.453 [2024-11-20 07:43:33.567887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.453 [2024-11-20 07:43:33.567897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.453 [2024-11-20 07:43:33.568069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.453 [2024-11-20 07:43:33.568221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.453 [2024-11-20 07:43:33.568228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.453 [2024-11-20 07:43:33.568233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.453 [2024-11-20 07:43:33.568240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.453 [2024-11-20 07:43:33.580139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.453 [2024-11-20 07:43:33.580612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.453 [2024-11-20 07:43:33.580642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.453 [2024-11-20 07:43:33.580651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.453 [2024-11-20 07:43:33.580823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.453 [2024-11-20 07:43:33.580976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.453 [2024-11-20 07:43:33.580982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.453 [2024-11-20 07:43:33.580988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.453 [2024-11-20 07:43:33.580994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.453 5568.20 IOPS, 21.75 MiB/s [2024-11-20T06:43:33.663Z] [2024-11-20 07:43:33.593895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.453 [2024-11-20 07:43:33.594403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.453 [2024-11-20 07:43:33.594433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.453 [2024-11-20 07:43:33.594442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.453 [2024-11-20 07:43:33.594607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.453 [2024-11-20 07:43:33.594764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.453 [2024-11-20 07:43:33.594771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.453 [2024-11-20 07:43:33.594777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.453 [2024-11-20 07:43:33.594783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.453 [2024-11-20 07:43:33.606542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.453 [2024-11-20 07:43:33.606951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.453 [2024-11-20 07:43:33.606967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.453 [2024-11-20 07:43:33.606972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.453 [2024-11-20 07:43:33.607122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.453 [2024-11-20 07:43:33.607275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.453 [2024-11-20 07:43:33.607281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.453 [2024-11-20 07:43:33.607286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.453 [2024-11-20 07:43:33.607291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.453 [2024-11-20 07:43:33.619190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.453 [2024-11-20 07:43:33.619766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.453 [2024-11-20 07:43:33.619796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.453 [2024-11-20 07:43:33.619805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.453 [2024-11-20 07:43:33.619972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.453 [2024-11-20 07:43:33.620125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.453 [2024-11-20 07:43:33.620131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.453 [2024-11-20 07:43:33.620137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.453 [2024-11-20 07:43:33.620143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.453 [2024-11-20 07:43:33.631795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.453 [2024-11-20 07:43:33.632297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.453 [2024-11-20 07:43:33.632312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.453 [2024-11-20 07:43:33.632317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.453 [2024-11-20 07:43:33.632466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.453 [2024-11-20 07:43:33.632615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.453 [2024-11-20 07:43:33.632621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.453 [2024-11-20 07:43:33.632626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.453 [2024-11-20 07:43:33.632630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.453 [2024-11-20 07:43:33.644373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.453 [2024-11-20 07:43:33.644865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.453 [2024-11-20 07:43:33.644895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.453 [2024-11-20 07:43:33.644905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.453 [2024-11-20 07:43:33.645073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.453 [2024-11-20 07:43:33.645226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.453 [2024-11-20 07:43:33.645232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.453 [2024-11-20 07:43:33.645238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.453 [2024-11-20 07:43:33.645247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.716 [2024-11-20 07:43:33.657009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.716 [2024-11-20 07:43:33.657464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-11-20 07:43:33.657479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.716 [2024-11-20 07:43:33.657485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.716 [2024-11-20 07:43:33.657634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.716 [2024-11-20 07:43:33.657789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.716 [2024-11-20 07:43:33.657796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.716 [2024-11-20 07:43:33.657801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.716 [2024-11-20 07:43:33.657806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.716 [2024-11-20 07:43:33.669704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.716 [2024-11-20 07:43:33.670175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-11-20 07:43:33.670189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.716 [2024-11-20 07:43:33.670194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.716 [2024-11-20 07:43:33.670342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.716 [2024-11-20 07:43:33.670491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.716 [2024-11-20 07:43:33.670497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.716 [2024-11-20 07:43:33.670502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.716 [2024-11-20 07:43:33.670507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.716 [2024-11-20 07:43:33.682390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.716 [2024-11-20 07:43:33.682882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-11-20 07:43:33.682912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.716 [2024-11-20 07:43:33.682921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.716 [2024-11-20 07:43:33.683088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.716 [2024-11-20 07:43:33.683240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.716 [2024-11-20 07:43:33.683246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.716 [2024-11-20 07:43:33.683252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.716 [2024-11-20 07:43:33.683258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.716 [2024-11-20 07:43:33.695010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.716 [2024-11-20 07:43:33.695672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-11-20 07:43:33.695702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.716 [2024-11-20 07:43:33.695710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.716 [2024-11-20 07:43:33.695883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.716 [2024-11-20 07:43:33.696043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.716 [2024-11-20 07:43:33.696051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.716 [2024-11-20 07:43:33.696056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.716 [2024-11-20 07:43:33.696062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.716 [2024-11-20 07:43:33.707668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.716 [2024-11-20 07:43:33.708164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-11-20 07:43:33.708179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.716 [2024-11-20 07:43:33.708185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.716 [2024-11-20 07:43:33.708335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.716 [2024-11-20 07:43:33.708484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.716 [2024-11-20 07:43:33.708490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.716 [2024-11-20 07:43:33.708495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.716 [2024-11-20 07:43:33.708500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.716 [2024-11-20 07:43:33.720382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.716 [2024-11-20 07:43:33.720789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-11-20 07:43:33.720809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.716 [2024-11-20 07:43:33.720815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.716 [2024-11-20 07:43:33.720970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.716 [2024-11-20 07:43:33.721120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.716 [2024-11-20 07:43:33.721126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.716 [2024-11-20 07:43:33.721131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.716 [2024-11-20 07:43:33.721137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.716 [2024-11-20 07:43:33.733059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.716 [2024-11-20 07:43:33.733614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-11-20 07:43:33.733645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.716 [2024-11-20 07:43:33.733657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.716 [2024-11-20 07:43:33.733828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.716 [2024-11-20 07:43:33.733981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.716 [2024-11-20 07:43:33.733988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.716 [2024-11-20 07:43:33.733993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.716 [2024-11-20 07:43:33.733999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.716 [2024-11-20 07:43:33.745749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.716 [2024-11-20 07:43:33.746259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.716 [2024-11-20 07:43:33.746289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.716 [2024-11-20 07:43:33.746297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.716 [2024-11-20 07:43:33.746462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.717 [2024-11-20 07:43:33.746614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.717 [2024-11-20 07:43:33.746621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.717 [2024-11-20 07:43:33.746627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.717 [2024-11-20 07:43:33.746633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.717 [2024-11-20 07:43:33.758383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.717 [2024-11-20 07:43:33.758862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-11-20 07:43:33.758892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.717 [2024-11-20 07:43:33.758901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.717 [2024-11-20 07:43:33.759068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.717 [2024-11-20 07:43:33.759220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.717 [2024-11-20 07:43:33.759227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.717 [2024-11-20 07:43:33.759233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.717 [2024-11-20 07:43:33.759239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.717 [2024-11-20 07:43:33.770993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.717 [2024-11-20 07:43:33.771516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.717 [2024-11-20 07:43:33.771531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:15.717 [2024-11-20 07:43:33.771537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:15.717 [2024-11-20 07:43:33.771686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:15.717 [2024-11-20 07:43:33.771844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.717 [2024-11-20 07:43:33.771851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.717 [2024-11-20 07:43:33.771856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.717 [2024-11-20 07:43:33.771861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.717 [2024-11-20 07:43:33.783605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.717 [2024-11-20 07:43:33.784108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-11-20 07:43:33.784139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.717 [2024-11-20 07:43:33.784148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.717 [2024-11-20 07:43:33.784312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.717 [2024-11-20 07:43:33.784465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.717 [2024-11-20 07:43:33.784472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.717 [2024-11-20 07:43:33.784477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.717 [2024-11-20 07:43:33.784483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.717 [2024-11-20 07:43:33.796253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.717 [2024-11-20 07:43:33.796828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-11-20 07:43:33.796861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.717 [2024-11-20 07:43:33.796870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.717 [2024-11-20 07:43:33.797038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.717 [2024-11-20 07:43:33.797190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.717 [2024-11-20 07:43:33.797196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.717 [2024-11-20 07:43:33.797201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.717 [2024-11-20 07:43:33.797207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.717 [2024-11-20 07:43:33.808953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.717 [2024-11-20 07:43:33.809428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-11-20 07:43:33.809444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.717 [2024-11-20 07:43:33.809450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.717 [2024-11-20 07:43:33.809600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.717 [2024-11-20 07:43:33.809753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.717 [2024-11-20 07:43:33.809760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.717 [2024-11-20 07:43:33.809765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.717 [2024-11-20 07:43:33.809773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.717 [2024-11-20 07:43:33.821667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.717 [2024-11-20 07:43:33.822188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-11-20 07:43:33.822202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.717 [2024-11-20 07:43:33.822208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.717 [2024-11-20 07:43:33.822357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.717 [2024-11-20 07:43:33.822506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.717 [2024-11-20 07:43:33.822513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.717 [2024-11-20 07:43:33.822519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.717 [2024-11-20 07:43:33.822524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.717 [2024-11-20 07:43:33.834275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.717 [2024-11-20 07:43:33.834722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-11-20 07:43:33.834734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.717 [2024-11-20 07:43:33.834739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.717 [2024-11-20 07:43:33.834893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.717 [2024-11-20 07:43:33.835042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.717 [2024-11-20 07:43:33.835048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.717 [2024-11-20 07:43:33.835053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.717 [2024-11-20 07:43:33.835058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.717 [2024-11-20 07:43:33.846959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.717 [2024-11-20 07:43:33.847448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-11-20 07:43:33.847461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.717 [2024-11-20 07:43:33.847466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.717 [2024-11-20 07:43:33.847614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.717 [2024-11-20 07:43:33.847769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.717 [2024-11-20 07:43:33.847776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.717 [2024-11-20 07:43:33.847781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.717 [2024-11-20 07:43:33.847785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.717 [2024-11-20 07:43:33.859532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.717 [2024-11-20 07:43:33.859998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-11-20 07:43:33.860011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.717 [2024-11-20 07:43:33.860016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.717 [2024-11-20 07:43:33.860164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.717 [2024-11-20 07:43:33.860313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.717 [2024-11-20 07:43:33.860319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.717 [2024-11-20 07:43:33.860324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.717 [2024-11-20 07:43:33.860329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.717 [2024-11-20 07:43:33.872222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.717 [2024-11-20 07:43:33.872709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.717 [2024-11-20 07:43:33.872722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.717 [2024-11-20 07:43:33.872727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.718 [2024-11-20 07:43:33.872880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.718 [2024-11-20 07:43:33.873030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.718 [2024-11-20 07:43:33.873036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.718 [2024-11-20 07:43:33.873041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.718 [2024-11-20 07:43:33.873045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.718 [2024-11-20 07:43:33.884934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.718 [2024-11-20 07:43:33.885371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-11-20 07:43:33.885401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.718 [2024-11-20 07:43:33.885410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.718 [2024-11-20 07:43:33.885575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.718 [2024-11-20 07:43:33.885727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.718 [2024-11-20 07:43:33.885734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.718 [2024-11-20 07:43:33.885739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.718 [2024-11-20 07:43:33.885752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.718 [2024-11-20 07:43:33.897523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.718 [2024-11-20 07:43:33.898097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-11-20 07:43:33.898127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.718 [2024-11-20 07:43:33.898139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.718 [2024-11-20 07:43:33.898304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.718 [2024-11-20 07:43:33.898456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.718 [2024-11-20 07:43:33.898463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.718 [2024-11-20 07:43:33.898468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.718 [2024-11-20 07:43:33.898474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.718 [2024-11-20 07:43:33.910240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.718 [2024-11-20 07:43:33.910808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.718 [2024-11-20 07:43:33.910839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.718 [2024-11-20 07:43:33.910848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.718 [2024-11-20 07:43:33.911013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.718 [2024-11-20 07:43:33.911166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.718 [2024-11-20 07:43:33.911172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.718 [2024-11-20 07:43:33.911177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.718 [2024-11-20 07:43:33.911184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.980 [2024-11-20 07:43:33.922956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.980 [2024-11-20 07:43:33.923450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.980 [2024-11-20 07:43:33.923465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.980 [2024-11-20 07:43:33.923470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.980 [2024-11-20 07:43:33.923619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.980 [2024-11-20 07:43:33.923773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.980 [2024-11-20 07:43:33.923779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.980 [2024-11-20 07:43:33.923784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.980 [2024-11-20 07:43:33.923789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.980 [2024-11-20 07:43:33.935538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.980 [2024-11-20 07:43:33.935968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.980 [2024-11-20 07:43:33.935998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.980 [2024-11-20 07:43:33.936006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.980 [2024-11-20 07:43:33.936171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.980 [2024-11-20 07:43:33.936328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.980 [2024-11-20 07:43:33.936334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.980 [2024-11-20 07:43:33.936340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.980 [2024-11-20 07:43:33.936345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.981 [2024-11-20 07:43:33.948243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.981 [2024-11-20 07:43:33.948797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.981 [2024-11-20 07:43:33.948827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.981 [2024-11-20 07:43:33.948837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.981 [2024-11-20 07:43:33.949004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.981 [2024-11-20 07:43:33.949156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.981 [2024-11-20 07:43:33.949162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.981 [2024-11-20 07:43:33.949168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.981 [2024-11-20 07:43:33.949174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.981 [2024-11-20 07:43:33.960925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.981 [2024-11-20 07:43:33.961414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.981 [2024-11-20 07:43:33.961429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.981 [2024-11-20 07:43:33.961435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.981 [2024-11-20 07:43:33.961584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.981 [2024-11-20 07:43:33.961733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.981 [2024-11-20 07:43:33.961739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.981 [2024-11-20 07:43:33.961751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.981 [2024-11-20 07:43:33.961758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.981 [2024-11-20 07:43:33.973509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.981 [2024-11-20 07:43:33.974137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.981 [2024-11-20 07:43:33.974168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.981 [2024-11-20 07:43:33.974177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.981 [2024-11-20 07:43:33.974342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.981 [2024-11-20 07:43:33.974494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.981 [2024-11-20 07:43:33.974500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.981 [2024-11-20 07:43:33.974506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.981 [2024-11-20 07:43:33.974516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.981 [2024-11-20 07:43:33.986131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.981 [2024-11-20 07:43:33.986704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.981 [2024-11-20 07:43:33.986734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.981 [2024-11-20 07:43:33.986742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.981 [2024-11-20 07:43:33.986915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.981 [2024-11-20 07:43:33.987067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.981 [2024-11-20 07:43:33.987074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.981 [2024-11-20 07:43:33.987079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.981 [2024-11-20 07:43:33.987085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.981 [2024-11-20 07:43:33.998842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.981 [2024-11-20 07:43:33.999275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.981 [2024-11-20 07:43:33.999290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.981 [2024-11-20 07:43:33.999296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.981 [2024-11-20 07:43:33.999445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.981 [2024-11-20 07:43:33.999594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.981 [2024-11-20 07:43:33.999600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.981 [2024-11-20 07:43:33.999605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.981 [2024-11-20 07:43:33.999610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.981 [2024-11-20 07:43:34.011500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.981 [2024-11-20 07:43:34.011986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.981 [2024-11-20 07:43:34.012016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.981 [2024-11-20 07:43:34.012025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.981 [2024-11-20 07:43:34.012192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.981 [2024-11-20 07:43:34.012344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.981 [2024-11-20 07:43:34.012351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.981 [2024-11-20 07:43:34.012356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.981 [2024-11-20 07:43:34.012362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.981 [2024-11-20 07:43:34.024131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.981 [2024-11-20 07:43:34.024624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.981 [2024-11-20 07:43:34.024639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.981 [2024-11-20 07:43:34.024644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.981 [2024-11-20 07:43:34.024797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.981 [2024-11-20 07:43:34.024947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.981 [2024-11-20 07:43:34.024953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.981 [2024-11-20 07:43:34.024958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.981 [2024-11-20 07:43:34.024963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.981 [2024-11-20 07:43:34.036709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.981 [2024-11-20 07:43:34.037296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.981 [2024-11-20 07:43:34.037327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.981 [2024-11-20 07:43:34.037335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.981 [2024-11-20 07:43:34.037500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.981 [2024-11-20 07:43:34.037652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.981 [2024-11-20 07:43:34.037659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.981 [2024-11-20 07:43:34.037664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.981 [2024-11-20 07:43:34.037670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.981 [2024-11-20 07:43:34.049425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.981 [2024-11-20 07:43:34.050063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.981 [2024-11-20 07:43:34.050094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.981 [2024-11-20 07:43:34.050102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.981 [2024-11-20 07:43:34.050267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.981 [2024-11-20 07:43:34.050419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.981 [2024-11-20 07:43:34.050425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.981 [2024-11-20 07:43:34.050431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.981 [2024-11-20 07:43:34.050437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.981 [2024-11-20 07:43:34.062053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.981 [2024-11-20 07:43:34.062637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.981 [2024-11-20 07:43:34.062668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.981 [2024-11-20 07:43:34.062679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.981 [2024-11-20 07:43:34.062852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.981 [2024-11-20 07:43:34.063006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.981 [2024-11-20 07:43:34.063013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.981 [2024-11-20 07:43:34.063019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.982 [2024-11-20 07:43:34.063026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.982 [2024-11-20 07:43:34.074634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.982 [2024-11-20 07:43:34.075656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.982 [2024-11-20 07:43:34.075677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.982 [2024-11-20 07:43:34.075684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.982 [2024-11-20 07:43:34.075844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.982 [2024-11-20 07:43:34.076001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.982 [2024-11-20 07:43:34.076008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.982 [2024-11-20 07:43:34.076013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.982 [2024-11-20 07:43:34.076018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.982 [2024-11-20 07:43:34.087342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.982 [2024-11-20 07:43:34.087859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.982 [2024-11-20 07:43:34.087873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.982 [2024-11-20 07:43:34.087878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.982 [2024-11-20 07:43:34.088027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.982 [2024-11-20 07:43:34.088176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.982 [2024-11-20 07:43:34.088182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.982 [2024-11-20 07:43:34.088187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.982 [2024-11-20 07:43:34.088192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.982 [2024-11-20 07:43:34.099955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.982 [2024-11-20 07:43:34.100423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.982 [2024-11-20 07:43:34.100437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.982 [2024-11-20 07:43:34.100442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.982 [2024-11-20 07:43:34.100590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.982 [2024-11-20 07:43:34.100739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.982 [2024-11-20 07:43:34.100752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.982 [2024-11-20 07:43:34.100757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.982 [2024-11-20 07:43:34.100762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.982 [2024-11-20 07:43:34.112633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.982 [2024-11-20 07:43:34.113152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.982 [2024-11-20 07:43:34.113165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.982 [2024-11-20 07:43:34.113170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.982 [2024-11-20 07:43:34.113319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.982 [2024-11-20 07:43:34.113468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.982 [2024-11-20 07:43:34.113474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.982 [2024-11-20 07:43:34.113479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.982 [2024-11-20 07:43:34.113483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.982 [2024-11-20 07:43:34.125228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.982 [2024-11-20 07:43:34.125679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.982 [2024-11-20 07:43:34.125691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.982 [2024-11-20 07:43:34.125696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.982 [2024-11-20 07:43:34.125848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.982 [2024-11-20 07:43:34.125997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.982 [2024-11-20 07:43:34.126003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.982 [2024-11-20 07:43:34.126008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.982 [2024-11-20 07:43:34.126013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.982 [2024-11-20 07:43:34.137912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.982 [2024-11-20 07:43:34.138395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.982 [2024-11-20 07:43:34.138408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.982 [2024-11-20 07:43:34.138413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.982 [2024-11-20 07:43:34.138561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.982 [2024-11-20 07:43:34.138710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.982 [2024-11-20 07:43:34.138716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.982 [2024-11-20 07:43:34.138721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.982 [2024-11-20 07:43:34.138728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.982 [2024-11-20 07:43:34.150499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.982 [2024-11-20 07:43:34.150938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.982 [2024-11-20 07:43:34.150951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.982 [2024-11-20 07:43:34.150956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.982 [2024-11-20 07:43:34.151104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.982 [2024-11-20 07:43:34.151253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.982 [2024-11-20 07:43:34.151259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.982 [2024-11-20 07:43:34.151264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.982 [2024-11-20 07:43:34.151268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.982 [2024-11-20 07:43:34.163163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.982 [2024-11-20 07:43:34.163487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.982 [2024-11-20 07:43:34.163500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.982 [2024-11-20 07:43:34.163506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.982 [2024-11-20 07:43:34.163654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.982 [2024-11-20 07:43:34.163808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.982 [2024-11-20 07:43:34.163814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.982 [2024-11-20 07:43:34.163819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.982 [2024-11-20 07:43:34.163824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:15.982 [2024-11-20 07:43:34.175874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:15.982 [2024-11-20 07:43:34.176454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.982 [2024-11-20 07:43:34.176484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:15.982 [2024-11-20 07:43:34.176493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:15.982 [2024-11-20 07:43:34.176660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:15.982 [2024-11-20 07:43:34.176821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:15.982 [2024-11-20 07:43:34.176828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:15.982 [2024-11-20 07:43:34.176833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:15.982 [2024-11-20 07:43:34.176839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.244 [2024-11-20 07:43:34.188459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.244 [2024-11-20 07:43:34.188960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.244 [2024-11-20 07:43:34.188975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.244 [2024-11-20 07:43:34.188981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.244 [2024-11-20 07:43:34.189130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.244 [2024-11-20 07:43:34.189280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.245 [2024-11-20 07:43:34.189286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.245 [2024-11-20 07:43:34.189291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.245 [2024-11-20 07:43:34.189295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.245 [2024-11-20 07:43:34.201068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.245 [2024-11-20 07:43:34.201555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.245 [2024-11-20 07:43:34.201568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.245 [2024-11-20 07:43:34.201573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.245 [2024-11-20 07:43:34.201722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.245 [2024-11-20 07:43:34.201876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.245 [2024-11-20 07:43:34.201882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.245 [2024-11-20 07:43:34.201887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.245 [2024-11-20 07:43:34.201891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.245 [2024-11-20 07:43:34.213660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.245 [2024-11-20 07:43:34.214226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.245 [2024-11-20 07:43:34.214256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.245 [2024-11-20 07:43:34.214265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.245 [2024-11-20 07:43:34.214430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.245 [2024-11-20 07:43:34.214582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.245 [2024-11-20 07:43:34.214589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.245 [2024-11-20 07:43:34.214594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.245 [2024-11-20 07:43:34.214600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.245 [2024-11-20 07:43:34.226382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.245 [2024-11-20 07:43:34.226848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.245 [2024-11-20 07:43:34.226863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.245 [2024-11-20 07:43:34.226872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.245 [2024-11-20 07:43:34.227021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.245 [2024-11-20 07:43:34.227171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.245 [2024-11-20 07:43:34.227177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.245 [2024-11-20 07:43:34.227182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.245 [2024-11-20 07:43:34.227187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.245 [2024-11-20 07:43:34.239078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.245 [2024-11-20 07:43:34.239526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.245 [2024-11-20 07:43:34.239539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.245 [2024-11-20 07:43:34.239545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.245 [2024-11-20 07:43:34.239693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.245 [2024-11-20 07:43:34.239847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.245 [2024-11-20 07:43:34.239853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.245 [2024-11-20 07:43:34.239858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.245 [2024-11-20 07:43:34.239863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.245 [2024-11-20 07:43:34.251752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.245 [2024-11-20 07:43:34.252230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.245 [2024-11-20 07:43:34.252243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.245 [2024-11-20 07:43:34.252249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.245 [2024-11-20 07:43:34.252397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.245 [2024-11-20 07:43:34.252546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.245 [2024-11-20 07:43:34.252552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.245 [2024-11-20 07:43:34.252557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.245 [2024-11-20 07:43:34.252561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
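[editor's note] Each nine-record block above is one iteration of the bdev_nvme reconnect path: connect() to 10.0.0.2:4420 is refused (errno 111 is ECONNREFUSED on Linux), the dead socket then fails to flush (EBADF), and the reset completes with failure before the next attempt roughly 12-13 ms later. A hedged way to confirm that cadence from a saved copy of this console output (console.log is an assumed filename, not something the job itself produces):

    # Count reconnect attempts: every nvme_ctrlr.c:1728 record opens one iteration.
    grep -c 'nvme_ctrlr.c:1728:nvme_ctrlr_disconnect' console.log
    # Print the per-attempt timestamps to eyeball the ~12-13 ms spacing.
    grep 'nvme_ctrlr.c:1728' console.log | awk '{print $3}' | tr -d ']'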
00:29:16.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3582010 Killed "${NVMF_APP[@]}" "$@"
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:16.245 [2024-11-20 07:43:34.264329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.245 [2024-11-20 07:43:34.264777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.245 [2024-11-20 07:43:34.264790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.245 [2024-11-20 07:43:34.264795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.245 [2024-11-20 07:43:34.264943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.245 [2024-11-20 07:43:34.265092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.245 [2024-11-20 07:43:34.265098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.245 [2024-11-20 07:43:34.265103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.245 [2024-11-20 07:43:34.265108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3583709
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3583709
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3583709 ']'
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:16.245 07:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:16.245 [2024-11-20 07:43:34.277008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.245 [2024-11-20 07:43:34.277451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.245 [2024-11-20 07:43:34.277463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.245 [2024-11-20 07:43:34.277469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.245 [2024-11-20 07:43:34.277617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.245 [2024-11-20 07:43:34.277770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.245 [2024-11-20 07:43:34.277776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.245 [2024-11-20 07:43:34.277782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.245 [2024-11-20 07:43:34.277787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.245 [2024-11-20 07:43:34.289661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.245 [2024-11-20 07:43:34.290224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.245 [2024-11-20 07:43:34.290254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.245 [2024-11-20 07:43:34.290263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.245 [2024-11-20 07:43:34.290428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.245 [2024-11-20 07:43:34.290583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.245 [2024-11-20 07:43:34.290590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.245 [2024-11-20 07:43:34.290595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.245 [2024-11-20 07:43:34.290602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
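[editor's note] The waitforlisten trace above blocks the test until the freshly launched nvmf_tgt (pid 3583709) answers on its RPC socket. The real helper lives in test/common/autotest_common.sh and may differ in detail; a minimal sketch of the same pattern, reusing the rpc_addr and max_retries locals visible in the trace (rpc_get_methods is a standard SPDK RPC exposed via scripts/rpc.py):

    # Poll /var/tmp/spdk.sock until the target answers RPC or retries run out.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    while ((max_retries-- > 0)); do
        kill -0 3583709 2>/dev/null || break                                # target process died
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break  # RPC is up
        sleep 0.1
    done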
00:29:16.246 [2024-11-20 07:43:34.302376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.246 [2024-11-20 07:43:34.302858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.246 [2024-11-20 07:43:34.302874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.246 [2024-11-20 07:43:34.302880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.246 [2024-11-20 07:43:34.303029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.246 [2024-11-20 07:43:34.303178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.246 [2024-11-20 07:43:34.303184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.246 [2024-11-20 07:43:34.303189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.246 [2024-11-20 07:43:34.303194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.246 [2024-11-20 07:43:34.315094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.246 [2024-11-20 07:43:34.315451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.246 [2024-11-20 07:43:34.315463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.246 [2024-11-20 07:43:34.315469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.246 [2024-11-20 07:43:34.315617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.246 [2024-11-20 07:43:34.315772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.246 [2024-11-20 07:43:34.315779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.246 [2024-11-20 07:43:34.315784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.246 [2024-11-20 07:43:34.315789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.246 [2024-11-20 07:43:34.323403] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:29:16.246 [2024-11-20 07:43:34.323449] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.246 [2024-11-20 07:43:34.327683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.246 [2024-11-20 07:43:34.328217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.246 [2024-11-20 07:43:34.328247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.246 [2024-11-20 07:43:34.328257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.246 [2024-11-20 07:43:34.328426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.246 [2024-11-20 07:43:34.328578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.246 [2024-11-20 07:43:34.328585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.246 [2024-11-20 07:43:34.328590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.246 [2024-11-20 07:43:34.328597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.246 [2024-11-20 07:43:34.340354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.246 [2024-11-20 07:43:34.340867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.246 [2024-11-20 07:43:34.340898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.246 [2024-11-20 07:43:34.340907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.246 [2024-11-20 07:43:34.341072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.246 [2024-11-20 07:43:34.341225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.246 [2024-11-20 07:43:34.341231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.246 [2024-11-20 07:43:34.341237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.246 [2024-11-20 07:43:34.341243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.246 [2024-11-20 07:43:34.353002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.246 [2024-11-20 07:43:34.353556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.246 [2024-11-20 07:43:34.353586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.246 [2024-11-20 07:43:34.353595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.246 [2024-11-20 07:43:34.353766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.246 [2024-11-20 07:43:34.353919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.246 [2024-11-20 07:43:34.353926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.246 [2024-11-20 07:43:34.353931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.246 [2024-11-20 07:43:34.353937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.246 [2024-11-20 07:43:34.365614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.246 [2024-11-20 07:43:34.366160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.246 [2024-11-20 07:43:34.366176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.246 [2024-11-20 07:43:34.366181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.246 [2024-11-20 07:43:34.366331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.246 [2024-11-20 07:43:34.366480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.246 [2024-11-20 07:43:34.366490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.246 [2024-11-20 07:43:34.366495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.246 [2024-11-20 07:43:34.366500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.246 [2024-11-20 07:43:34.378245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.246 [2024-11-20 07:43:34.378704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.246 [2024-11-20 07:43:34.378717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.246 [2024-11-20 07:43:34.378723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.246 [2024-11-20 07:43:34.378877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.246 [2024-11-20 07:43:34.379026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.246 [2024-11-20 07:43:34.379032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.246 [2024-11-20 07:43:34.379037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.246 [2024-11-20 07:43:34.379042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.246 [2024-11-20 07:43:34.390926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.246 [2024-11-20 07:43:34.391412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.246 [2024-11-20 07:43:34.391424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.246 [2024-11-20 07:43:34.391430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.246 [2024-11-20 07:43:34.391579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.246 [2024-11-20 07:43:34.391728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.246 [2024-11-20 07:43:34.391734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.246 [2024-11-20 07:43:34.391740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.246 [2024-11-20 07:43:34.391749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.246 [2024-11-20 07:43:34.403501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.246 [2024-11-20 07:43:34.403885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.246 [2024-11-20 07:43:34.403915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.246 [2024-11-20 07:43:34.403924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.246 [2024-11-20 07:43:34.404091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.246 [2024-11-20 07:43:34.404243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.246 [2024-11-20 07:43:34.404250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.246 [2024-11-20 07:43:34.404255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.246 [2024-11-20 07:43:34.404261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.246 [2024-11-20 07:43:34.413187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:16.246 [2024-11-20 07:43:34.416168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.246 [2024-11-20 07:43:34.416714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.246 [2024-11-20 07:43:34.416750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.246 [2024-11-20 07:43:34.416759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.246 [2024-11-20 07:43:34.416927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.247 [2024-11-20 07:43:34.417079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.247 [2024-11-20 07:43:34.417086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.247 [2024-11-20 07:43:34.417092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.247 [2024-11-20 07:43:34.417098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.247 [2024-11-20 07:43:34.428864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.247 [2024-11-20 07:43:34.429449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.247 [2024-11-20 07:43:34.429479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.247 [2024-11-20 07:43:34.429489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.247 [2024-11-20 07:43:34.429654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.247 [2024-11-20 07:43:34.429814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.247 [2024-11-20 07:43:34.429822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.247 [2024-11-20 07:43:34.429828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.247 [2024-11-20 07:43:34.429834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.247 [2024-11-20 07:43:34.441445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.247 [2024-11-20 07:43:34.442080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.247 [2024-11-20 07:43:34.442111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.247 [2024-11-20 07:43:34.442121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.247 [2024-11-20 07:43:34.442287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.247 [2024-11-20 07:43:34.442440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.247 [2024-11-20 07:43:34.442446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.247 [2024-11-20 07:43:34.442453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.247 [2024-11-20 07:43:34.442460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.247 [2024-11-20 07:43:34.442596] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.247 [2024-11-20 07:43:34.442619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.247 [2024-11-20 07:43:34.442626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.247 [2024-11-20 07:43:34.442632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.247 [2024-11-20 07:43:34.442637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
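Per the app_setup_trace notices above, the restarted target was launched with `-e 0xFFFF`, so all tracepoint groups are enabled. As the log itself suggests, a snapshot can be captured at runtime with `spdk_trace -s nvmf -i 0` (the `-i 0` matches the target's `-i 0` shared-memory instance ID), or `/dev/shm/nvmf_trace.0` can simply be copied off the box for offline analysis once the run ends.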
00:29:16.247 [2024-11-20 07:43:34.443982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:16.247 [2024-11-20 07:43:34.444195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:16.247 [2024-11-20 07:43:34.444196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:16.509 [2024-11-20 07:43:34.454088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.509 [2024-11-20 07:43:34.454615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.509 [2024-11-20 07:43:34.454631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.509 [2024-11-20 07:43:34.454637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.509 [2024-11-20 07:43:34.454791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.509 [2024-11-20 07:43:34.454942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.509 [2024-11-20 07:43:34.454949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.509 [2024-11-20 07:43:34.454954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.509 [2024-11-20 07:43:34.454960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.509 [2024-11-20 07:43:34.466694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.509 [2024-11-20 07:43:34.467073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.509 [2024-11-20 07:43:34.467087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.509 [2024-11-20 07:43:34.467093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.509 [2024-11-20 07:43:34.467243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.509 [2024-11-20 07:43:34.467392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.509 [2024-11-20 07:43:34.467398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.509 [2024-11-20 07:43:34.467404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.509 [2024-11-20 07:43:34.467409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
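The three reactor lines are consistent with the `-m 0xE` core mask passed to nvmf_tgt above: 0xE is binary 1110, i.e. cores 1, 2 and 3, which also matches the earlier "Total cores available: 3" notice. A small illustrative sketch of that mask-to-core mapping (not SPDK code):

```c
/* Sketch: how a core mask like 0xE maps to "Total cores available: 3"
 * and reactors on cores 1, 2 and 3. Bit i set means core i is used. */
#include <stdio.h>

int main(void)
{
    unsigned mask = 0xE;                               /* from -m 0xE */
    printf("Total cores available: %d ->", __builtin_popcount(mask));
    for (int core = 0; mask != 0; core++, mask >>= 1)
        if (mask & 1)
            printf(" core %d", core);
    printf("\n");                        /* prints: core 1 core 2 core 3 */
    return 0;
}
```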
00:29:16.509 [2024-11-20 07:43:34.479288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.509 [2024-11-20 07:43:34.479750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.509 [2024-11-20 07:43:34.479763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.509 [2024-11-20 07:43:34.479769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.509 [2024-11-20 07:43:34.479918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.509 [2024-11-20 07:43:34.480067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.509 [2024-11-20 07:43:34.480078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.509 [2024-11-20 07:43:34.480083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.509 [2024-11-20 07:43:34.480088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.509 [2024-11-20 07:43:34.491966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.510 [2024-11-20 07:43:34.492458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.510 [2024-11-20 07:43:34.492470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.510 [2024-11-20 07:43:34.492476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.510 [2024-11-20 07:43:34.492625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.510 [2024-11-20 07:43:34.492779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.510 [2024-11-20 07:43:34.492785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.510 [2024-11-20 07:43:34.492791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.510 [2024-11-20 07:43:34.492796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.510 [2024-11-20 07:43:34.504683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.510 [2024-11-20 07:43:34.505020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.510 [2024-11-20 07:43:34.505035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.510 [2024-11-20 07:43:34.505041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.510 [2024-11-20 07:43:34.505190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.510 [2024-11-20 07:43:34.505339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.510 [2024-11-20 07:43:34.505345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.510 [2024-11-20 07:43:34.505350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.510 [2024-11-20 07:43:34.505355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.510 [2024-11-20 07:43:34.517370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.510 [2024-11-20 07:43:34.517977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.510 [2024-11-20 07:43:34.518009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.510 [2024-11-20 07:43:34.518018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.510 [2024-11-20 07:43:34.518185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.510 [2024-11-20 07:43:34.518337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.510 [2024-11-20 07:43:34.518344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.510 [2024-11-20 07:43:34.518350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.510 [2024-11-20 07:43:34.518360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.510 [2024-11-20 07:43:34.529991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.510 [2024-11-20 07:43:34.530583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.510 [2024-11-20 07:43:34.530613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.510 [2024-11-20 07:43:34.530622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.510 [2024-11-20 07:43:34.530794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.510 [2024-11-20 07:43:34.530947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.510 [2024-11-20 07:43:34.530954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.510 [2024-11-20 07:43:34.530959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.510 [2024-11-20 07:43:34.530965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.510 [2024-11-20 07:43:34.542569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.510 [2024-11-20 07:43:34.542957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.510 [2024-11-20 07:43:34.542988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.510 [2024-11-20 07:43:34.542997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.510 [2024-11-20 07:43:34.543164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.510 [2024-11-20 07:43:34.543317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.510 [2024-11-20 07:43:34.543323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.510 [2024-11-20 07:43:34.543329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.510 [2024-11-20 07:43:34.543335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.510 [2024-11-20 07:43:34.555229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.510 [2024-11-20 07:43:34.555858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.510 [2024-11-20 07:43:34.555888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.510 [2024-11-20 07:43:34.555897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.510 [2024-11-20 07:43:34.556064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.510 [2024-11-20 07:43:34.556216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.510 [2024-11-20 07:43:34.556223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.510 [2024-11-20 07:43:34.556229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.510 [2024-11-20 07:43:34.556234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.510 [2024-11-20 07:43:34.567829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.510 [2024-11-20 07:43:34.568381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.510 [2024-11-20 07:43:34.568415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.510 [2024-11-20 07:43:34.568424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.510 [2024-11-20 07:43:34.568589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.510 [2024-11-20 07:43:34.568742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.510 [2024-11-20 07:43:34.568755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.510 [2024-11-20 07:43:34.568761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.510 [2024-11-20 07:43:34.568767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.510 [2024-11-20 07:43:34.580519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.510 [2024-11-20 07:43:34.581000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.510 [2024-11-20 07:43:34.581016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.510 [2024-11-20 07:43:34.581022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.510 [2024-11-20 07:43:34.581172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.510 [2024-11-20 07:43:34.581321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.510 [2024-11-20 07:43:34.581327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.510 [2024-11-20 07:43:34.581332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.510 [2024-11-20 07:43:34.581337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.510 4640.17 IOPS, 18.13 MiB/s [2024-11-20T06:43:34.720Z] [2024-11-20 07:43:34.594381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.510 [2024-11-20 07:43:34.594871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.510 [2024-11-20 07:43:34.594901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:16.510 [2024-11-20 07:43:34.594910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:16.510 [2024-11-20 07:43:34.595078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:16.510 [2024-11-20 07:43:34.595230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.510 [2024-11-20 07:43:34.595237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.510 [2024-11-20 07:43:34.595243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.510 [2024-11-20 07:43:34.595249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
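The in-flight bdevperf sample "4640.17 IOPS, 18.13 MiB/s" is internally consistent if the run uses 4 KiB I/O; the I/O size is not shown in this excerpt, so treat it as an assumption: 4640.17 x 4096 B ≈ 19.0 MB/s ≈ 18.13 MiB/s. A quick check of that arithmetic (illustrative only):

```c
/* Consistency check of the throughput sample above. The 4 KiB I/O size
 * is an assumption; only the IOPS figure comes from the log. */
#include <stdio.h>

int main(void)
{
    double iops = 4640.17;                  /* from the log */
    double io_bytes = 4096.0;               /* assumed I/O size */
    printf("%.2f MiB/s\n", iops * io_bytes / (1024.0 * 1024.0)); /* ~18.13 */
    return 0;
}
```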
00:29:16.510 [2024-11-20 07:43:34.607012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.510 [2024-11-20 07:43:34.607606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.510 [2024-11-20 07:43:34.607636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.510 [2024-11-20 07:43:34.607644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.510 [2024-11-20 07:43:34.607819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.510 [2024-11-20 07:43:34.607972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.510 [2024-11-20 07:43:34.607979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.510 [2024-11-20 07:43:34.607985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.511 [2024-11-20 07:43:34.607991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.511 [2024-11-20 07:43:34.619591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.511 [2024-11-20 07:43:34.620168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.511 [2024-11-20 07:43:34.620198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.511 [2024-11-20 07:43:34.620207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.511 [2024-11-20 07:43:34.620372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.511 [2024-11-20 07:43:34.620524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.511 [2024-11-20 07:43:34.620531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.511 [2024-11-20 07:43:34.620537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.511 [2024-11-20 07:43:34.620542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.511 [2024-11-20 07:43:34.632281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.511 [2024-11-20 07:43:34.632845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.511 [2024-11-20 07:43:34.632876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.511 [2024-11-20 07:43:34.632884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.511 [2024-11-20 07:43:34.633052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.511 [2024-11-20 07:43:34.633204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.511 [2024-11-20 07:43:34.633211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.511 [2024-11-20 07:43:34.633216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.511 [2024-11-20 07:43:34.633222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.511 [2024-11-20 07:43:34.644973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.511 [2024-11-20 07:43:34.645571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.511 [2024-11-20 07:43:34.645601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.511 [2024-11-20 07:43:34.645609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.511 [2024-11-20 07:43:34.645779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.511 [2024-11-20 07:43:34.645932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.511 [2024-11-20 07:43:34.645941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.511 [2024-11-20 07:43:34.645947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.511 [2024-11-20 07:43:34.645953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.511 [2024-11-20 07:43:34.657559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.511 [2024-11-20 07:43:34.658170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.511 [2024-11-20 07:43:34.658200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.511 [2024-11-20 07:43:34.658209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.511 [2024-11-20 07:43:34.658374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.511 [2024-11-20 07:43:34.658526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.511 [2024-11-20 07:43:34.658533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.511 [2024-11-20 07:43:34.658538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.511 [2024-11-20 07:43:34.658544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.511 [2024-11-20 07:43:34.670149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.511 [2024-11-20 07:43:34.670496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.511 [2024-11-20 07:43:34.670511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.511 [2024-11-20 07:43:34.670516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.511 [2024-11-20 07:43:34.670665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.511 [2024-11-20 07:43:34.670818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.511 [2024-11-20 07:43:34.670824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.511 [2024-11-20 07:43:34.670829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.511 [2024-11-20 07:43:34.670834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.511 [2024-11-20 07:43:34.682858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.511 [2024-11-20 07:43:34.683329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.511 [2024-11-20 07:43:34.683342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.511 [2024-11-20 07:43:34.683347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.511 [2024-11-20 07:43:34.683496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.511 [2024-11-20 07:43:34.683645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.511 [2024-11-20 07:43:34.683651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.511 [2024-11-20 07:43:34.683656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.511 [2024-11-20 07:43:34.683667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.511 [2024-11-20 07:43:34.695466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.511 [2024-11-20 07:43:34.695993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.511 [2024-11-20 07:43:34.696007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.511 [2024-11-20 07:43:34.696012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.511 [2024-11-20 07:43:34.696161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.511 [2024-11-20 07:43:34.696310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.511 [2024-11-20 07:43:34.696316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.511 [2024-11-20 07:43:34.696321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.511 [2024-11-20 07:43:34.696326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.511 [2024-11-20 07:43:34.708072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.511 [2024-11-20 07:43:34.708543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.511 [2024-11-20 07:43:34.708555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.511 [2024-11-20 07:43:34.708561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.511 [2024-11-20 07:43:34.708710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.511 [2024-11-20 07:43:34.708862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.511 [2024-11-20 07:43:34.708868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.511 [2024-11-20 07:43:34.708873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.511 [2024-11-20 07:43:34.708878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.774 [2024-11-20 07:43:34.720766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.774 [2024-11-20 07:43:34.721319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.774 [2024-11-20 07:43:34.721350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.774 [2024-11-20 07:43:34.721359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.774 [2024-11-20 07:43:34.721526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.774 [2024-11-20 07:43:34.721678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.774 [2024-11-20 07:43:34.721685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.774 [2024-11-20 07:43:34.721690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.774 [2024-11-20 07:43:34.721696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.774 [2024-11-20 07:43:34.733445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.774 [2024-11-20 07:43:34.734111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.774 [2024-11-20 07:43:34.734142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.774 [2024-11-20 07:43:34.734150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.774 [2024-11-20 07:43:34.734315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.774 [2024-11-20 07:43:34.734467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.774 [2024-11-20 07:43:34.734474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.774 [2024-11-20 07:43:34.734479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.774 [2024-11-20 07:43:34.734485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.774 [2024-11-20 07:43:34.746092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.774 [2024-11-20 07:43:34.746593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.774 [2024-11-20 07:43:34.746607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.774 [2024-11-20 07:43:34.746613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.774 [2024-11-20 07:43:34.746766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.774 [2024-11-20 07:43:34.746915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.774 [2024-11-20 07:43:34.746921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.774 [2024-11-20 07:43:34.746926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.774 [2024-11-20 07:43:34.746931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.774 [2024-11-20 07:43:34.758801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.774 [2024-11-20 07:43:34.759151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.774 [2024-11-20 07:43:34.759166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.774 [2024-11-20 07:43:34.759171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.774 [2024-11-20 07:43:34.759320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.774 [2024-11-20 07:43:34.759469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.774 [2024-11-20 07:43:34.759475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.774 [2024-11-20 07:43:34.759480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.774 [2024-11-20 07:43:34.759484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.774 [2024-11-20 07:43:34.771499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.775 [2024-11-20 07:43:34.772058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.775 [2024-11-20 07:43:34.772089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.775 [2024-11-20 07:43:34.772098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.775 [2024-11-20 07:43:34.772266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.775 [2024-11-20 07:43:34.772418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.775 [2024-11-20 07:43:34.772425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.775 [2024-11-20 07:43:34.772430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.775 [2024-11-20 07:43:34.772436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.775 [2024-11-20 07:43:34.784183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.775 [2024-11-20 07:43:34.784781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.775 [2024-11-20 07:43:34.784811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.775 [2024-11-20 07:43:34.784820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.775 [2024-11-20 07:43:34.784987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.775 [2024-11-20 07:43:34.785139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.775 [2024-11-20 07:43:34.785146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.775 [2024-11-20 07:43:34.785151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.775 [2024-11-20 07:43:34.785157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.775 [2024-11-20 07:43:34.796775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.775 [2024-11-20 07:43:34.797367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.775 [2024-11-20 07:43:34.797397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420 00:29:16.775 [2024-11-20 07:43:34.797406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set 00:29:16.775 [2024-11-20 07:43:34.797572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor 00:29:16.775 [2024-11-20 07:43:34.797724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.775 [2024-11-20 07:43:34.797730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.775 [2024-11-20 07:43:34.797736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.775 [2024-11-20 07:43:34.797742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.039 [2024-11-20 07:43:35.112986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.039 [2024-11-20 07:43:35.113570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.039 [2024-11-20 07:43:35.113600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:17.039 [2024-11-20 07:43:35.113609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:17.039 [2024-11-20 07:43:35.113780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:17.039 [2024-11-20 07:43:35.113934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.039 [2024-11-20 07:43:35.113940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.039 [2024-11-20 07:43:35.113946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.039 [2024-11-20 07:43:35.113952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.039 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:17.039 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0
00:29:17.039 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:17.039 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:17.039 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:17.039 [2024-11-20 07:43:35.125567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.039 [2024-11-20 07:43:35.126041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.039 [2024-11-20 07:43:35.126072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:17.039 [2024-11-20 07:43:35.126081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:17.039 [2024-11-20 07:43:35.126246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:17.039 [2024-11-20 07:43:35.126398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.039 [2024-11-20 07:43:35.126405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.039 [2024-11-20 07:43:35.126411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.040 [2024-11-20 07:43:35.126416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
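The autotest_common.sh@862 "(( i == 0 ))" check and @866 "return 0" traced above are the exit path of one of the harness's bounded retry helpers, the wait-until-ready idiom these scripts use before timing_exit fires. A generic sketch of that idiom (wait_for is hypothetical and written only to illustrate the pattern, not the verbatim helper):

wait_for() {
    local i=$1; shift          # attempts remaining, then the check command
    while (( i > 0 )); do
        "$@" && return 0       # check passed: report success
        sleep 0.5
        (( i-- ))
    done
    return 1                   # attempts exhausted
}
wait_for 20 pgrep -x nvmf_tgt >/dev/null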
00:29:17.040 [2024-11-20 07:43:35.163498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.040 [2024-11-20 07:43:35.164052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.040 [2024-11-20 07:43:35.164083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:17.040 [2024-11-20 07:43:35.164091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:17.040 [2024-11-20 07:43:35.164257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:17.040 [2024-11-20 07:43:35.164409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.040 [2024-11-20 07:43:35.164415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.040 [2024-11-20 07:43:35.164421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.040 [2024-11-20 07:43:35.164426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:17.040 [2024-11-20 07:43:35.169466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:17.040 [2024-11-20 07:43:35.176178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.040 [2024-11-20 07:43:35.176877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.040 [2024-11-20 07:43:35.176908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:17.040 [2024-11-20 07:43:35.176916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:17.040 [2024-11-20 07:43:35.177082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:17.040 [2024-11-20 07:43:35.177234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.040 [2024-11-20 07:43:35.177241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.040 [2024-11-20 07:43:35.177247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.040 [2024-11-20 07:43:35.177252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.040 [2024-11-20 07:43:35.188863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.040 [2024-11-20 07:43:35.189465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.040 [2024-11-20 07:43:35.189495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:17.040 [2024-11-20 07:43:35.189504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:17.040 [2024-11-20 07:43:35.189669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:17.040 [2024-11-20 07:43:35.189829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.040 [2024-11-20 07:43:35.189837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.040 [2024-11-20 07:43:35.189842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.040 [2024-11-20 07:43:35.189848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.040 Malloc0
00:29:17.040 [2024-11-20 07:43:35.201463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:17.040 [2024-11-20 07:43:35.201870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.040 [2024-11-20 07:43:35.201900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:17.040 [2024-11-20 07:43:35.201909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:17.040 [2024-11-20 07:43:35.202076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:17.040 [2024-11-20 07:43:35.202229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.040 [2024-11-20 07:43:35.202236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.040 [2024-11-20 07:43:35.202246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.040 [2024-11-20 07:43:35.202252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
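rpc_cmd in the traces above hands each command to the target's JSON-RPC server over its UNIX-domain socket (/var/tmp/spdk.sock by default). A sketch of what the transport-creation call carries on the wire; the method name and trtype come straight from the log, while mapping the -u 8192 flag to io_unit_size is an assumption about rpc.py's short option, and the nc pipe is only illustrative (scripts/rpc.py is the normal client):

# Send one raw JSON-RPC request to the target's UNIX socket.
cat <<'EOF' | nc -U /var/tmp/spdk.sock
{"jsonrpc": "2.0", "id": 1, "method": "nvmf_create_transport", "params": {"trtype": "TCP", "io_unit_size": 8192}}
EOF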
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:17.040 [2024-11-20 07:43:35.214181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.040 [2024-11-20 07:43:35.214649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.040 [2024-11-20 07:43:35.214679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:17.040 [2024-11-20 07:43:35.214688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:17.040 [2024-11-20 07:43:35.214862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:17.040 [2024-11-20 07:43:35.215015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.040 [2024-11-20 07:43:35.215022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.040 [2024-11-20 07:43:35.215027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.040 [2024-11-20 07:43:35.215034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
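At this point the subsystem and its Malloc0 namespace exist but there is still no listener, which is why the host's connect() attempts keep dying with ECONNREFUSED. One way to inspect the half-built target from another shell is the nvmf_get_subsystems RPC; a sketch, with the jq filter being purely illustrative:

# Dump cnode1's current state: expect a namespace but an empty listener list.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
  | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | {nqn, listen_addresses, namespaces}'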
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:17.040 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:17.040 [2024-11-20 07:43:35.226794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.040 [2024-11-20 07:43:35.227380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.040 [2024-11-20 07:43:35.227410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb280 with addr=10.0.0.2, port=4420
00:29:17.040 [2024-11-20 07:43:35.227419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb280 is same with the state(6) to be set
00:29:17.040 [2024-11-20 07:43:35.227584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb280 (9): Bad file descriptor
00:29:17.040 [2024-11-20 07:43:35.227737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.040 [2024-11-20 07:43:35.227743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.041 [2024-11-20 07:43:35.227755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.041 [2024-11-20 07:43:35.227762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.041 [2024-11-20 07:43:35.232401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:17.041 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:17.041 07:43:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3582391
00:29:17.041 [2024-11-20 07:43:35.239509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.302 [2024-11-20 07:43:35.261135] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
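Taken together, the rpc_cmd calls interleaved through the retry noise are a complete TCP target bring-up, and the instant nvmf_tcp_listen comes up on 10.0.0.2:4420 the very next reconnect attempt succeeds ("Resetting controller successful"). The same sequence as direct rpc.py invocations, a sketch using the flags exactly as logged:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport layer
$rpc bdev_malloc_create 64 512 -b Malloc0                                       # RAM-backed bdev (size in MiB, 512 B blocks)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # subsystem, any host allowed
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose Malloc0 as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # finally, listen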
00:29:18.504 4561.43 IOPS, 17.82 MiB/s
[2024-11-20T06:43:37.657Z] 5536.00 IOPS, 21.62 MiB/s
[2024-11-20T06:43:39.039Z] 6306.22 IOPS, 24.63 MiB/s
[2024-11-20T06:43:39.609Z] 6925.10 IOPS, 27.05 MiB/s
[2024-11-20T06:43:40.993Z] 7443.36 IOPS, 29.08 MiB/s
[2024-11-20T06:43:41.933Z] 7854.50 IOPS, 30.68 MiB/s
[2024-11-20T06:43:42.872Z] 8202.85 IOPS, 32.04 MiB/s
[2024-11-20T06:43:43.812Z] 8507.21 IOPS, 33.23 MiB/s
[2024-11-20T06:43:43.812Z] 8778.87 IOPS, 34.29 MiB/s
00:29:25.602 Latency(us)
00:29:25.602 [2024-11-20T06:43:43.812Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s    TO/s  Average  min     max
00:29:25.602 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:25.602 Verification LBA range: start 0x0 length 0x4000
00:29:25.602 Nvme1n1 :                                 15.01      8780.72  34.30  13333.92  0.00  5767.24  583.68  26978.99
00:29:25.602 [2024-11-20T06:43:43.812Z] ===================================================================================================================
00:29:25.602 [2024-11-20T06:43:43.812Z] Total :                           8780.72   34.30  13333.92  0.00  5767.24  583.68  26978.99
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3583709 ']'
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3583709
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 3583709 ']'
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 3583709
00:29:25.602 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3583709
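As a sanity check on the latency table above: the MiB/s column is just IOPS times the 4096-byte I/O size, 8780.72 IOPS x 4096 B = 35,965,829 B/s, or about 34.30 MiB/s, which matches the reported throughput; over the 15.01 s runtime that is roughly 131,800 verified I/Os, with an average completion latency of 5767.24 us and a worst case of about 27 ms.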
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3583709'
killing process with pid 3583709
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 3583709
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 3583709
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:25.862 07:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:25.862 07:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:25.862 07:43:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:28.410
00:29:28.410 real 0m28.277s
00:29:28.410 user 1m2.910s
00:29:28.410 sys 0m7.816s
00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:28.410 ************************************
00:29:28.410 END TEST nvmf_bdevperf
00:29:28.410 ************************************
00:29:28.410 07:43:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:28.410 07:43:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:29:28.410 07:43:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:28.410 07:43:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:28.410 ************************************
00:29:28.410 START TEST nvmf_target_disconnect
00:29:28.410 ************************************
00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:28.410 * Looking for test storage...
00:29:28.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:28.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.410 --rc genhtml_branch_coverage=1 00:29:28.410 --rc genhtml_function_coverage=1 00:29:28.410 --rc genhtml_legend=1 00:29:28.410 --rc geninfo_all_blocks=1 00:29:28.410 --rc geninfo_unexecuted_blocks=1 00:29:28.410 00:29:28.410 ' 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:28.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.410 --rc genhtml_branch_coverage=1 00:29:28.410 --rc genhtml_function_coverage=1 00:29:28.410 --rc genhtml_legend=1 00:29:28.410 --rc geninfo_all_blocks=1 00:29:28.410 --rc geninfo_unexecuted_blocks=1 00:29:28.410 00:29:28.410 ' 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:28.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.410 --rc genhtml_branch_coverage=1 00:29:28.410 --rc genhtml_function_coverage=1 00:29:28.410 --rc genhtml_legend=1 00:29:28.410 --rc geninfo_all_blocks=1 00:29:28.410 --rc geninfo_unexecuted_blocks=1 00:29:28.410 00:29:28.410 ' 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:28.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.410 --rc genhtml_branch_coverage=1 00:29:28.410 --rc genhtml_function_coverage=1 00:29:28.410 --rc genhtml_legend=1 00:29:28.410 --rc geninfo_all_blocks=1 00:29:28.410 --rc geninfo_unexecuted_blocks=1 00:29:28.410 00:29:28.410 ' 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.410 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:28.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.411 07:43:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:36.556 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:36.557 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:36.557 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:36.557 Found net devices under 0000:31:00.0: cvl_0_0 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:36.557 Found net devices under 0000:31:00.1: cvl_0_1 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
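How the harness found cvl_0_0 and cvl_0_1 above: for each supported PCI function it globs the device's net/ directory in sysfs, which is what pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expands to. A minimal illustrative sketch of that mapping, using the two E810 functions this run discovered (the device addresses and echo text come from the log; the loop itself is a paraphrase, not the script):

  # Map NVMe-oF-capable PCI functions to their kernel net devices via sysfs.
  for pci in 0000:31:00.0 0000:31:00.1; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$path" ] || continue        # function not bound to a net driver
      echo "Found net devices under $pci: ${path##*/}"
    done
  done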
00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:36.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:29:36.557 00:29:36.557 --- 10.0.0.2 ping statistics --- 00:29:36.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.557 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:29:36.557 07:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:36.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:29:36.557 00:29:36.557 --- 10.0.0.1 ping statistics --- 00:29:36.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.557 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:36.558 ************************************ 00:29:36.558 START TEST nvmf_target_disconnect_tc1 00:29:36.558 ************************************ 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:36.558 07:43:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.558 [2024-11-20 07:43:54.238844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.558 [2024-11-20 07:43:54.238940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154af60 with addr=10.0.0.2, port=4420 00:29:36.558 [2024-11-20 07:43:54.238975] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:36.558 [2024-11-20 07:43:54.238990] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:36.558 [2024-11-20 07:43:54.238999] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:36.558 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:36.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:36.558 Initializing NVMe Controllers 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:36.558 00:29:36.558 real 0m0.150s 00:29:36.558 user 0m0.060s 00:29:36.558 sys 0m0.087s 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:36.558 ************************************ 00:29:36.558 END TEST nvmf_target_disconnect_tc1 00:29:36.558 ************************************ 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:36.558 ************************************ 00:29:36.558 START TEST nvmf_target_disconnect_tc2 00:29:36.558 ************************************ 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3589797 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3589797 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3589797 ']' 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:36.558 07:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.558 [2024-11-20 07:43:54.421739] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:29:36.558 [2024-11-20 07:43:54.421807] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.558 [2024-11-20 07:43:54.511564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:36.558 [2024-11-20 07:43:54.565180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.558 [2024-11-20 07:43:54.565255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
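By this point nvmftestinit has finished wiring the test network: the nvmf_tcp_init trace above reduces to a short iproute2 sequence. The sketch below is a condensed, illustrative replay, not the script itself (the real nvmf/common.sh also flushes stale addresses first and tags the iptables rule with an 'SPDK_NVMF:...' comment); interface and namespace names are the ones this run derived:

  # cvl_0_0 becomes the target side inside a netns; cvl_0_1 stays as initiator.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move target NIC into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root netns reaches the target netns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back
  modprobe nvme-tcp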
00:29:36.558 [2024-11-20 07:43:54.565265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.558 [2024-11-20 07:43:54.565272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.558 [2024-11-20 07:43:54.565278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.559 [2024-11-20 07:43:54.567419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:36.559 [2024-11-20 07:43:54.567579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:36.559 [2024-11-20 07:43:54.567736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:36.559 [2024-11-20 07:43:54.567737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.131 Malloc0 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.131 [2024-11-20 07:43:55.320360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.131 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.393 07:43:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.393 [2024-11-20 07:43:55.360809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3589952 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:37.393 07:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:39.313 07:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3589797 00:29:39.313 07:43:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Write completed with error 
(sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Write completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Write completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Write completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Write completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Write completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Write completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Write completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Write completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Write completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Read completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 Write completed with error (sct=0, sc=8) 00:29:39.313 starting I/O failed 00:29:39.313 [2024-11-20 07:43:57.400978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.313 [2024-11-20 07:43:57.401331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.401365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.401743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.401765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.402206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.402261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 
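The target_disconnect.sh tc2 steps traced above amount to: start a target inside the namespace, build a Malloc-backed subsystem over the RPC socket, aim the reconnect example at the listener, then hard-kill the target mid-I/O. A hedged sketch of that flow, with rpc_cmd expanded to direct scripts/rpc.py calls (rpc_cmd is the harness wrapper around that script); all flags are taken verbatim from the trace, only the variable names are invented here:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!                                  # 3589797 in this log
  # waitforlisten: poll until /var/tmp/spdk.sock answers RPCs, then:
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  sleep 2
  kill -9 "$nvmfpid"        # hard-kill mid-I/O; the failures logged here follow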
00:29:39.313 [2024-11-20 07:43:57.402665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.402682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.403243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.403299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.403637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.403652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.403859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.403876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.404106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.404119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.404440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.404452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.404809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.404821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.405217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.405230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.405544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.405558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.405718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.405730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 
00:29:39.313 [2024-11-20 07:43:57.406022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.406038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.406349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.406361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.406438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.406449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.313 qpair failed and we were unable to recover it. 00:29:39.313 [2024-11-20 07:43:57.406690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.313 [2024-11-20 07:43:57.406701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.406996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.407014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.407288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.407300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.407601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.407612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.407850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.407861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.408185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.408196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.408408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.408418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 
00:29:39.314 [2024-11-20 07:43:57.408724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.408734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.409117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.409128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.409452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.409463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.409813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.409824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.410165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.410177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.410381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.410393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.410723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.410735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.411047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.411059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.411296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.411308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 00:29:39.314 [2024-11-20 07:43:57.411485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.314 [2024-11-20 07:43:57.411497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.314 qpair failed and we were unable to recover it. 
00:29:39.314 [2024-11-20 07:43:57.411770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.314 [2024-11-20 07:43:57.411782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:39.314 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 07:43:57.411770 through 07:43:57.483250: posix_sock_create reports connect() failed, errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7fa118000b90 (addr=10.0.0.2, port=4420), and each qpair is declared unrecoverable ...]
00:29:39.320 [2024-11-20 07:43:57.483223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.320 [2024-11-20 07:43:57.483250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:39.320 qpair failed and we were unable to recover it.
00:29:39.320 [2024-11-20 07:43:57.483621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.483649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.484084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.484115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.484466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.484496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.484868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.484896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.485122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.485153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.485556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.485584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.485989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.486020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.486271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.486301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.486668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.486697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.487038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.487069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 
00:29:39.320 [2024-11-20 07:43:57.487432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.487460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.487870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.487900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.488254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.488283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.488647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.488675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.489027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.489063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.489405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.489434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.489806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.489835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.490160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.490196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.490529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.490557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.490912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.490941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 
00:29:39.320 [2024-11-20 07:43:57.491307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.491335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.491703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.491731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.492007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.492036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.492412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.492440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.492800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.492830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.493189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.493218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.493598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.493627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.493978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.494008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.494379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.494407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.494768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.494797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 
00:29:39.320 [2024-11-20 07:43:57.495069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.320 [2024-11-20 07:43:57.495097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.320 qpair failed and we were unable to recover it. 00:29:39.320 [2024-11-20 07:43:57.495472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.495501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.495871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.495901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.496256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.496284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.496651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.496679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.497052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.497082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.497454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.497484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.497821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.497852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.498206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.498236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.498487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.498519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 
00:29:39.321 [2024-11-20 07:43:57.498922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.498951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.499309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.499338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.499705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.499733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.500097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.500127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.500498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.500527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.500872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.500902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.501269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.501298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.501594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.501622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.501993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.502023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.502311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.502339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 
00:29:39.321 [2024-11-20 07:43:57.502701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.502729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.503098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.503127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.503491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.503519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.503865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.503893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.504263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.504297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.504626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.504657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.504988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.505017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.505404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.505433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.505797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.505826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.506237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.506265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 
00:29:39.321 [2024-11-20 07:43:57.506608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.506638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.506981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.507011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.507374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.507401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.507806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.507836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.508211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.508238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.508603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.508630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.509052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.509082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.509430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.509458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.321 [2024-11-20 07:43:57.509806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.321 [2024-11-20 07:43:57.509836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.321 qpair failed and we were unable to recover it. 00:29:39.322 [2024-11-20 07:43:57.510093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.322 [2024-11-20 07:43:57.510120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.322 qpair failed and we were unable to recover it. 
00:29:39.322 [2024-11-20 07:43:57.510346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.322 [2024-11-20 07:43:57.510373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.322 qpair failed and we were unable to recover it. 00:29:39.322 [2024-11-20 07:43:57.510815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.322 [2024-11-20 07:43:57.510844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.322 qpair failed and we were unable to recover it. 00:29:39.322 [2024-11-20 07:43:57.511199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.322 [2024-11-20 07:43:57.511226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.322 qpair failed and we were unable to recover it. 00:29:39.322 [2024-11-20 07:43:57.511399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.322 [2024-11-20 07:43:57.511429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.322 qpair failed and we were unable to recover it. 00:29:39.322 [2024-11-20 07:43:57.511783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.322 [2024-11-20 07:43:57.511813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.322 qpair failed and we were unable to recover it. 00:29:39.322 [2024-11-20 07:43:57.512193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.322 [2024-11-20 07:43:57.512221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.322 qpair failed and we were unable to recover it. 00:29:39.322 [2024-11-20 07:43:57.512587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.322 [2024-11-20 07:43:57.512616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.322 qpair failed and we were unable to recover it. 00:29:39.322 [2024-11-20 07:43:57.512869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.322 [2024-11-20 07:43:57.512902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.322 qpair failed and we were unable to recover it. 00:29:39.322 [2024-11-20 07:43:57.513149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.322 [2024-11-20 07:43:57.513177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.322 qpair failed and we were unable to recover it. 00:29:39.594 [2024-11-20 07:43:57.513518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.594 [2024-11-20 07:43:57.513550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.594 qpair failed and we were unable to recover it. 
00:29:39.594 [2024-11-20 07:43:57.513916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.594 [2024-11-20 07:43:57.513947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.594 qpair failed and we were unable to recover it. 00:29:39.594 [2024-11-20 07:43:57.515815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.594 [2024-11-20 07:43:57.515882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.594 qpair failed and we were unable to recover it. 00:29:39.594 [2024-11-20 07:43:57.516293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.594 [2024-11-20 07:43:57.516328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.594 qpair failed and we were unable to recover it. 00:29:39.594 [2024-11-20 07:43:57.516697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.594 [2024-11-20 07:43:57.516725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.594 qpair failed and we were unable to recover it. 00:29:39.594 [2024-11-20 07:43:57.517084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.594 [2024-11-20 07:43:57.517114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.594 qpair failed and we were unable to recover it. 00:29:39.594 [2024-11-20 07:43:57.517368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.517396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.517798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.517829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.518202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.518231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.518609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.518639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.519000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.519030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 
00:29:39.595 [2024-11-20 07:43:57.519286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.519318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.519676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.519705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.520071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.520100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.520473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.520509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.520874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.520913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.521287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.521316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.521676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.521703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.521990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.522019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.522275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.522306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.522677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.522705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 
00:29:39.595 [2024-11-20 07:43:57.523066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.523096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.523458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.523486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.523726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.523769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.524062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.524091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.524464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.524492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.524865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.524895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.525248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.525277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.525638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.525667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.525938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.525968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.526334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.526363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 
00:29:39.595 [2024-11-20 07:43:57.526727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.526828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.527248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.527276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.527611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.527640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.528020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.528050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.528401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.528430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.528673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.528706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.529084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.529115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.529481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.529511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.529880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.529911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.530268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.530298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 
00:29:39.595 [2024-11-20 07:43:57.530639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.530668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.531035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.531065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.531422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.595 [2024-11-20 07:43:57.531451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.595 qpair failed and we were unable to recover it. 00:29:39.595 [2024-11-20 07:43:57.531802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.596 [2024-11-20 07:43:57.531830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.596 qpair failed and we were unable to recover it. 00:29:39.596 [2024-11-20 07:43:57.532244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.596 [2024-11-20 07:43:57.532274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.596 qpair failed and we were unable to recover it. 00:29:39.596 [2024-11-20 07:43:57.532628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.596 [2024-11-20 07:43:57.532656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.596 qpair failed and we were unable to recover it. 00:29:39.596 [2024-11-20 07:43:57.532997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.596 [2024-11-20 07:43:57.533026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.596 qpair failed and we were unable to recover it. 00:29:39.596 [2024-11-20 07:43:57.533430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.596 [2024-11-20 07:43:57.533459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.596 qpair failed and we were unable to recover it. 00:29:39.596 [2024-11-20 07:43:57.533815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.596 [2024-11-20 07:43:57.533845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.596 qpair failed and we were unable to recover it. 00:29:39.596 [2024-11-20 07:43:57.534222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.596 [2024-11-20 07:43:57.534251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.596 qpair failed and we were unable to recover it. 
00:29:39.596 [2024-11-20 07:43:57.534627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.596 [2024-11-20 07:43:57.534655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:39.596 qpair failed and we were unable to recover it.
00:29:39.601 [... the same three-line failure repeats for every reconnect attempt from 07:43:57.534 through 07:43:57.616: each connect() to 10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered; only the timestamps differ between repetitions ...]
00:29:39.601 [2024-11-20 07:43:57.616009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.601 [2024-11-20 07:43:57.616039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:39.601 qpair failed and we were unable to recover it.
00:29:39.601 [2024-11-20 07:43:57.616400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.601 [2024-11-20 07:43:57.616428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.601 qpair failed and we were unable to recover it. 00:29:39.601 [2024-11-20 07:43:57.616693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.601 [2024-11-20 07:43:57.616721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.601 qpair failed and we were unable to recover it. 00:29:39.601 [2024-11-20 07:43:57.617128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.617157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.617522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.617550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.617792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.617822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.618157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.618185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.618543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.618573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.618828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.618857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.619252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.619281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.619647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.619675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 
00:29:39.602 [2024-11-20 07:43:57.620100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.620129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.620483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.620513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.620868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.620899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.621142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.621174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.621550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.621578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.621933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.621962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.622399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.622427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.622708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.622737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.622878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.622906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.623169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.623201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 
00:29:39.602 [2024-11-20 07:43:57.623546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.623575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.623943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.623978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.624317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.624346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.624712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.624740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.625108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.625136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.625500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.625528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.625772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.625805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.626236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.626265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.626658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.626686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.627035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.627065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 
00:29:39.602 [2024-11-20 07:43:57.627193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.627224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.627594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.627624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.627975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.628006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.628375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.628404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.628781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.628811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.629223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.629251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.629582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.629611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.629852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.629884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.630260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.630288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 00:29:39.602 [2024-11-20 07:43:57.630520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.602 [2024-11-20 07:43:57.630550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.602 qpair failed and we were unable to recover it. 
00:29:39.602 [2024-11-20 07:43:57.630900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.630929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.631177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.631208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.631581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.631610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.631975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.632007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.632238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.632270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.632623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.632652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.633002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.633031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.633291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.633317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.633676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.633705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.634003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.634033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 
00:29:39.603 [2024-11-20 07:43:57.634392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.634420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.634773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.634801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.635172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.635200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.635566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.635593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.635982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.636013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.636419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.636448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.636833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.636864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.637239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.637267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.637423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.637451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.637812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.637841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 
00:29:39.603 [2024-11-20 07:43:57.638212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.638242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.638608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.638643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.638978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.639010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.639365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.639393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.639770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.639800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.640138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.640167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.640457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.640487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.640757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.640790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.641056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.641085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.641459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.641487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 
00:29:39.603 [2024-11-20 07:43:57.641842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.641871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.642308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.642336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.642587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.642615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.642966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.642996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.643370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.643398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.643775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.643804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.603 qpair failed and we were unable to recover it. 00:29:39.603 [2024-11-20 07:43:57.644188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.603 [2024-11-20 07:43:57.644217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.644582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.644611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.644983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.645012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.645387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.645414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 
00:29:39.604 [2024-11-20 07:43:57.645770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.645799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.646164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.646192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.646552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.646581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.646978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.647008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.647228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.647259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.647504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.647535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.647909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.647937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.648175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.648206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.648486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.648513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.648895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.648924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 
00:29:39.604 [2024-11-20 07:43:57.649271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.649300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.649670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.649697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.649947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.649976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.650326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.650354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.650713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.650740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.651004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.651032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.651383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.651410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.651674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.651701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.652061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.652090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.652468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.652498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 
00:29:39.604 [2024-11-20 07:43:57.652777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.652808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.653159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.653195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.653578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.653606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.653767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.653795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.654269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.654298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.654542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.654571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.654936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.654966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.655227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.655257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.655595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.655624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.655891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.655921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 
00:29:39.604 [2024-11-20 07:43:57.656309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.656338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.604 [2024-11-20 07:43:57.656582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.604 [2024-11-20 07:43:57.656614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.604 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.656987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.657024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.657268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.657297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.657669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.657698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.658146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.658178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.658542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.658573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.658982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.659012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.659256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.659286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.659575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.659602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 
00:29:39.605 [2024-11-20 07:43:57.659864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.659896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.660320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.660350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.660724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.660762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.661140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.661172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.661326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.661358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.661741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.661782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.662204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.662234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.662453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.662484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.662783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.662814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 00:29:39.605 [2024-11-20 07:43:57.663087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.605 [2024-11-20 07:43:57.663115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.605 qpair failed and we were unable to recover it. 
00:29:39.605 [2024-11-20 07:43:57.663479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.605 [2024-11-20 07:43:57.663507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:39.605 qpair failed and we were unable to recover it.
00:29:39.605 [... the same three-line failure sequence repeats for every reconnect attempt from 07:43:57.663 through 07:43:57.742, always with errno = 111 for tqpair=0x7fa118000b90, addr=10.0.0.2, port=4420 ...]
00:29:39.611 [2024-11-20 07:43:57.742111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.611 [2024-11-20 07:43:57.742140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:39.611 qpair failed and we were unable to recover it.
00:29:39.611 [2024-11-20 07:43:57.742564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.742592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.742940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.742976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.743330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.743357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.743725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.743762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.744114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.744142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.744511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.744539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.744908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.744945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.745310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.745337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.745744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.745781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.746138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.746167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 
00:29:39.611 [2024-11-20 07:43:57.746415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.746443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.746779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.746810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.747184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.747212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.747439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.747470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.747921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.747950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.748309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.748337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.748698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.748727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.749090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.749118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.749357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.749385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.749760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.749789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 
00:29:39.611 [2024-11-20 07:43:57.750069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.750097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.750446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.750473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.750825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.750856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.751216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.751244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.751606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.751634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.752004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.752034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.752410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.752438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.752817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.752845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.753256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.753286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.753543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.753572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 
00:29:39.611 [2024-11-20 07:43:57.753926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.753954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.611 [2024-11-20 07:43:57.754337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.611 [2024-11-20 07:43:57.754365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.611 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.754719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.754760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.755122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.755150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.755503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.755531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.755873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.755903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.756234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.756263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.756629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.756656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.756905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.756935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.757333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.757361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 
00:29:39.612 [2024-11-20 07:43:57.757589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.757619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.757867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.757907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.758126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.758158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.758517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.758545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.758904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.758934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.759295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.759323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.759615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.759642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.759990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.760020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.760397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.760425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.760673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.760704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 
00:29:39.612 [2024-11-20 07:43:57.760982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.761013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.761425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.761453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.761783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.761813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.762162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.762189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.762418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.762448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.762810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.762841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.763201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.763230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.763588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.763616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.763970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.764000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.764361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.764389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 
00:29:39.612 [2024-11-20 07:43:57.764760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.764789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.765152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.765180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.765547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.765574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.765971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.765999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.766351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.766381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.766757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.766787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.767151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.767178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.767545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.767573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.767802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.767841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.612 qpair failed and we were unable to recover it. 00:29:39.612 [2024-11-20 07:43:57.768182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.612 [2024-11-20 07:43:57.768212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 
00:29:39.613 [2024-11-20 07:43:57.768561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.768588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.768984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.769013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.769372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.769400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.769794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.769823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.770188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.770215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.770564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.770592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.770973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.771002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.771254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.771281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.771671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.771699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.772051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.772079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 
00:29:39.613 [2024-11-20 07:43:57.772440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.772467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.772823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.772853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.773214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.773242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.773608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.773635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.774072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.774100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.774434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.774462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.774825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.774853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.775228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.775255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.775625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.775653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.775997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.776025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 
00:29:39.613 [2024-11-20 07:43:57.776400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.776427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.776782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.776811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.777147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.777175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.777520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.777548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.777850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.777879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.778125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.778153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.778511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.778539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.778805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.778834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.779189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.779216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.779568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.779598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 
00:29:39.613 [2024-11-20 07:43:57.779970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.780000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.780351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.780378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.780755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.613 [2024-11-20 07:43:57.780784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.613 qpair failed and we were unable to recover it. 00:29:39.613 [2024-11-20 07:43:57.781132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.781160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.781523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.781550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.781891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.781921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.782333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.782361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.782703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.782730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.783128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.783164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.783528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.783556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 
00:29:39.614 [2024-11-20 07:43:57.783918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.783948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.784314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.784342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.784705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.784732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.785098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.785126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.785470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.785498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.785876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.785905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.786151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.786178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.786447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.786475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.786830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.786860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.787110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.787138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 
00:29:39.614 [2024-11-20 07:43:57.787501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.787528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.787892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.787922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.788302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.788330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.788693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.614 [2024-11-20 07:43:57.788722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.614 qpair failed and we were unable to recover it. 00:29:39.614 [2024-11-20 07:43:57.789094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.887 [2024-11-20 07:43:57.789123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.887 qpair failed and we were unable to recover it. 00:29:39.887 [2024-11-20 07:43:57.789487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.887 [2024-11-20 07:43:57.789518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.887 qpair failed and we were unable to recover it. 00:29:39.887 [2024-11-20 07:43:57.789780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.887 [2024-11-20 07:43:57.789810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.887 qpair failed and we were unable to recover it. 00:29:39.887 [2024-11-20 07:43:57.790073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.887 [2024-11-20 07:43:57.790100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.887 qpair failed and we were unable to recover it. 00:29:39.887 [2024-11-20 07:43:57.790477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.887 [2024-11-20 07:43:57.790504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.887 qpair failed and we were unable to recover it. 00:29:39.887 [2024-11-20 07:43:57.790863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.887 [2024-11-20 07:43:57.790891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.887 qpair failed and we were unable to recover it. 
00:29:39.887 [2024-11-20 07:43:57.791259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.887 [2024-11-20 07:43:57.791287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:39.887 qpair failed and we were unable to recover it.
[... the identical three-line sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously, differing only in timestamps, from 2024-11-20 07:43:57.791549 through 07:43:57.871371 ...]
00:29:39.893 [2024-11-20 07:43:57.871396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.893 [2024-11-20 07:43:57.871423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:39.893 qpair failed and we were unable to recover it.
00:29:39.893 [2024-11-20 07:43:57.871792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.871822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.872225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.872252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.872624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.872651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.873021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.873049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.873406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.873434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.873778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.873813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.874209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.874236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.874600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.874627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.875003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.875039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.875394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.875425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 
00:29:39.893 [2024-11-20 07:43:57.875779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.875808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.876163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.876191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.876562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.876590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.876961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.876990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.877350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.877378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.877767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.877795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.878199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.878226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.878574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.878602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.878972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.879001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.879365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.879392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 
00:29:39.893 [2024-11-20 07:43:57.879779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.879808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.880178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.880206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.880563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.880591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.880970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.880998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.881339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.881367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.881721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.881756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.882128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.882156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.882536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.882564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.882928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.882958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.883222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.883250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 
00:29:39.893 [2024-11-20 07:43:57.883506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.883538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.883800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.893 [2024-11-20 07:43:57.883831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.893 qpair failed and we were unable to recover it. 00:29:39.893 [2024-11-20 07:43:57.884087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.884119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.884464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.884493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.884862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.884891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.885260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.885288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.885627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.885655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.886021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.886051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.886417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.886445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.886795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.886824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 
00:29:39.894 [2024-11-20 07:43:57.887209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.887237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.887604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.887631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.887992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.888022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.888387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.888415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.888792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.888820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.889067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.889099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.889451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.889479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.889803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.889833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.890180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.890219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.890635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.890663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 
00:29:39.894 [2024-11-20 07:43:57.891013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.891041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.891399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.891428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.891818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.891867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.892227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.892256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.892628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.892656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.893030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.893059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.893413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.893440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.893840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.893869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.894079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.894110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.894471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.894498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 
00:29:39.894 [2024-11-20 07:43:57.894866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.894894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.895259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.895287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.895562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.895591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.895969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.895998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.896340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.896368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.896735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.896772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.896993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.897024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.897399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.897428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.897800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.897829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 00:29:39.894 [2024-11-20 07:43:57.898081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.894 [2024-11-20 07:43:57.898113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.894 qpair failed and we were unable to recover it. 
00:29:39.895 [2024-11-20 07:43:57.898472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.898500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.898786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.898815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.899202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.899230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.899597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.899624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.899993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.900023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.900393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.900422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.900788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.900817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.901178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.901206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.901537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.901564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.901991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.902020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 
00:29:39.895 [2024-11-20 07:43:57.902391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.902419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.902831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.902860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.903273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.903302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.903655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.903683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.904059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.904090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.904453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.904482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.904839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.904867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.905228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.905257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.905627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.905664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.906024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.906054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 
00:29:39.895 [2024-11-20 07:43:57.906414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.906442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.906807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.906836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.907198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.907227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.907664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.907692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.908063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.908093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.908450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.908480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.908857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.908887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.909257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.909286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.909654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.909681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.910044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.910075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 
00:29:39.895 [2024-11-20 07:43:57.910441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.910470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.910823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.910852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.911215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.911244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.911612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.911640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.911977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.912011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.912349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.912378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.912737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.912779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.895 qpair failed and we were unable to recover it. 00:29:39.895 [2024-11-20 07:43:57.913159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.895 [2024-11-20 07:43:57.913187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.913555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.913584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.913953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.913983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 
00:29:39.896 [2024-11-20 07:43:57.914418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.914447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.914802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.914833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.915208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.915237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.915619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.915649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.916024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.916055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.916411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.916441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.916816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.916845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.917202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.917231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.917590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.917621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.917971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.918000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 
00:29:39.896 [2024-11-20 07:43:57.918358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.918388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.918757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.918787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.919133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.919162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.919524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.919554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.919920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.919950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.920337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.920366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.920727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.920781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.921096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.921125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.921483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.921517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 00:29:39.896 [2024-11-20 07:43:57.921880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.921910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it. 
00:29:39.896 [2024-11-20 07:43:57.922298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.896 [2024-11-20 07:43:57.922328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.896 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats continuously from 07:43:57.922 through 07:43:58.002 ...]
00:29:39.902 [2024-11-20 07:43:58.002232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.002259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it.
00:29:39.902 [2024-11-20 07:43:58.002641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.002669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.003017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.003047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.003406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.003434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.003877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.003906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.004268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.004298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.004682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.004709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.005070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.005100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.005454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.005482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.005862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.005891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.006252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.006280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 
00:29:39.902 [2024-11-20 07:43:58.006633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.006661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.006916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.006949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.007316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.007346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.007699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.007728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.008102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.008132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.008505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.008540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.008906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.008935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.009300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.009328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.009690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.009717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.010159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.010188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 
00:29:39.902 [2024-11-20 07:43:58.010548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.902 [2024-11-20 07:43:58.010576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.902 qpair failed and we were unable to recover it. 00:29:39.902 [2024-11-20 07:43:58.011012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.011042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.011374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.011401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.011845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.011873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.012218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.012246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.012609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.012636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.012981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.013010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.013377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.013405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.013769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.013797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.014030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.014062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 
00:29:39.903 [2024-11-20 07:43:58.014417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.014447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.014812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.014841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.015059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.015089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.015326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.015357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.015703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.015731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.016128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.016156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.016519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.016548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.016853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.016882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.017262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.017291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.017658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.017686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 
00:29:39.903 [2024-11-20 07:43:58.018059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.018088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.018444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.018472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.018824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.018853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.019218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.019248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.019487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.019515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.019878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.019908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.020285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.020313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.020663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.020691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.021088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.021118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.021446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.021475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 
00:29:39.903 [2024-11-20 07:43:58.021870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.021899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.022244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.022273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.022534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.022561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.022969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.022998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.023354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.023383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.023609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.023645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.023997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.024028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.024370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.024397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.903 [2024-11-20 07:43:58.024763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.903 [2024-11-20 07:43:58.024793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.903 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.025150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.025178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 
00:29:39.904 [2024-11-20 07:43:58.025476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.025506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.025757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.025787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.026137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.026165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.026418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.026449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.026880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.026910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.027281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.027310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.027662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.027689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.028062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.028091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.028313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.028343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.028730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.028767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 
00:29:39.904 [2024-11-20 07:43:58.029100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.029130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.029479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.029507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.029774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.029803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.030026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.030058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.030425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.030453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.030801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.030831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.031203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.031231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.031581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.031608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.031971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.032001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.032374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.032401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 
00:29:39.904 [2024-11-20 07:43:58.032765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.032794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.033157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.033185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.033557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.033585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.033970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.034000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.034363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.034391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.034756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.034786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.035143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.035171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.035528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.035556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.035924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.035953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.036204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.036233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 
00:29:39.904 [2024-11-20 07:43:58.036599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.036627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.037004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.037034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.037325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.037353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.037714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.037742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.038111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.038139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.038487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.038521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.904 [2024-11-20 07:43:58.038874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.904 [2024-11-20 07:43:58.038904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.904 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.039146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.039177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.039526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.039562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.039834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.039864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 
00:29:39.905 [2024-11-20 07:43:58.040210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.040238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.040618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.040646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.041047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.041076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.041409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.041437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.041804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.041832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.042098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.042126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.042481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.042508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.042872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.042901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.043295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.043322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.043560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.043592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 
00:29:39.905 [2024-11-20 07:43:58.043981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.044012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.044366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.044393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.044779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.044808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.045208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.045236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.045622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.045651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.045900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.045932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.046196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.046224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.046585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.046614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.046964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.046994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.047260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.047287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 
00:29:39.905 [2024-11-20 07:43:58.047716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.047744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.048049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.048081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.048446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.048474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.048851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.048881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.049232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.049260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.049628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.049656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.050002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.050031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.050392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.050421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.050792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.050821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 00:29:39.905 [2024-11-20 07:43:58.051221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.905 [2024-11-20 07:43:58.051249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:39.905 qpair failed and we were unable to recover it. 
00:29:39.905 [2024-11-20 07:43:58.051623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.905 [2024-11-20 07:43:58.051652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:39.905 qpair failed and we were unable to recover it.
[... the identical posix_sock_create / nvme_tcp_qpair_connect_sock error pair, each followed by "qpair failed and we were unable to recover it.", repeats roughly 200 more times for tqpair=0x7fa118000b90 between 07:43:58.051 and 07:43:58.130; only the last occurrence is kept below ...]
00:29:40.195 [2024-11-20 07:43:58.130567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.195 [2024-11-20 07:43:58.130595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:40.195 qpair failed and we were unable to recover it.
00:29:40.195 [2024-11-20 07:43:58.130964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.130993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.131350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.131377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.131740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.131779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.132020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.132048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.132478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.132505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.132840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.132870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.133238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.133266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.133573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.133600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.133851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.133880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.134213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.134240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 
00:29:40.195 [2024-11-20 07:43:58.134612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.134640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.135000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.135029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.135392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.135421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.135667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.135695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.135946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.135977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.136222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.136254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.136511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.136540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.136899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.136929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.137272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.137301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.137664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.137691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 
00:29:40.195 [2024-11-20 07:43:58.138056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.138086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.138456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.138484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.138820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.138850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.139227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.139255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.139620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.139648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.140045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.140074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.140427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.140454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.140888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.140918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.141300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.141328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.141767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.141795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 
00:29:40.195 [2024-11-20 07:43:58.142219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.142246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.142606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.142634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.143006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.143035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.195 [2024-11-20 07:43:58.143395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.195 [2024-11-20 07:43:58.143423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.195 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.143796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.143825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.144204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.144238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.144620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.144649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.145015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.145045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.145387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.145416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.145779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.145808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 
00:29:40.196 [2024-11-20 07:43:58.146174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.146201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.146554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.146581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.146915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.146945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.147306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.147333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.147699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.147727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.148097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.148128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.148464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.148492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.148723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.148765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.149141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.149169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.149537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.149565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 
00:29:40.196 [2024-11-20 07:43:58.149928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.149958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.150318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.150345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.150596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.150624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.151007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.151036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.151302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.151329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.151600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.151629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.151978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.152008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.152381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.152408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.152781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.152810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.152996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.153029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 
00:29:40.196 [2024-11-20 07:43:58.153460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.153489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.153846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.153877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.154240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.154274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.154632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.154659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.155041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.196 [2024-11-20 07:43:58.155071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.196 qpair failed and we were unable to recover it. 00:29:40.196 [2024-11-20 07:43:58.155432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.155461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.155827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.155857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.156229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.156257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.156625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.156653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.157020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.157049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 
00:29:40.197 [2024-11-20 07:43:58.157404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.157432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.157793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.157823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.158183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.158211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.158579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.158608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.158988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.159016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.159385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.159413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.159774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.159804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.160163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.160190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.160567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.160598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.160970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.161001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 
00:29:40.197 [2024-11-20 07:43:58.161342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.161370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.161733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.161772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.162120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.162150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.162513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.162541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.162804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.162834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.163219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.163248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.163608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.163636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.164010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.164040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.164453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.164481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.164815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.164848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 
00:29:40.197 [2024-11-20 07:43:58.165098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.165127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.165497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.165525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.165889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.165919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.166176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.166207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.166569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.166599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.166973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.167003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.167362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.167392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.167783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.167815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.168189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.168217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.168580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.168608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 
00:29:40.197 [2024-11-20 07:43:58.168975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.169007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.169431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.169461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.169804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.197 [2024-11-20 07:43:58.169840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.197 qpair failed and we were unable to recover it. 00:29:40.197 [2024-11-20 07:43:58.170208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.170238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.170471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.170504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.170852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.170887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.171260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.171290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.171552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.171581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.171972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.172001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.172351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.172380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 
00:29:40.198 [2024-11-20 07:43:58.172742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.172780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.173132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.173169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.173422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.173452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.173804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.173833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.174162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.174191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.174619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.174649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.175036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.175068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.175429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.175458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.175816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.175848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.176211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.176239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 
00:29:40.198 [2024-11-20 07:43:58.176618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.176647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.177006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.177036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.177378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.177407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.177769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.177800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.178158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.178186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.178554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.178583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.178928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.178959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.179239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.179269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.179512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.179544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 00:29:40.198 [2024-11-20 07:43:58.179935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.179967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 
00:29:40.198 [2024-11-20 07:43:58.180312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.198 [2024-11-20 07:43:58.180342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.198 qpair failed and we were unable to recover it. 
00:29:40.198 [... the same three-line error (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats back-to-back for every reconnect attempt from 07:43:58.180312 through 07:43:58.259689; only the timestamps change ...]
00:29:40.204 [2024-11-20 07:43:58.260056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.204 [2024-11-20 07:43:58.260085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.204 qpair failed and we were unable to recover it. 00:29:40.204 [2024-11-20 07:43:58.260444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.204 [2024-11-20 07:43:58.260473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.204 qpair failed and we were unable to recover it. 00:29:40.204 [2024-11-20 07:43:58.260824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.204 [2024-11-20 07:43:58.260861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.204 qpair failed and we were unable to recover it. 00:29:40.204 [2024-11-20 07:43:58.261204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.204 [2024-11-20 07:43:58.261233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.204 qpair failed and we were unable to recover it. 00:29:40.204 [2024-11-20 07:43:58.261607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.204 [2024-11-20 07:43:58.261634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.204 qpair failed and we were unable to recover it. 00:29:40.204 [2024-11-20 07:43:58.261989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.204 [2024-11-20 07:43:58.262020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.204 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.262376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.262403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.262775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.262804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.263166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.263195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.263551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.263578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 
00:29:40.205 [2024-11-20 07:43:58.263955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.263985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.264225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.264255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.264626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.264654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.264989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.265020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.265398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.265425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.265663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.265691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.266054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.266083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.266440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.266469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.266819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.266849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.267213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.267241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 
00:29:40.205 [2024-11-20 07:43:58.267612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.267639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.267931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.267960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.268218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.268249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.268501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.268529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.268880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.268908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.269259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.269287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.269644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.269672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.269916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.269948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.270304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.270332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.270697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.270726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 
00:29:40.205 [2024-11-20 07:43:58.271082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.271111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.271477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.271505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.271874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.271905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.272085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.272116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.272485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.272513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.272864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.272893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.273267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.205 [2024-11-20 07:43:58.273295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.205 qpair failed and we were unable to recover it. 00:29:40.205 [2024-11-20 07:43:58.273644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.273679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.274065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.274095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.274341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.274368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 
00:29:40.206 [2024-11-20 07:43:58.274611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.274639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.274990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.275019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.275390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.275425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.275800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.275829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.276208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.276236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.276557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.276585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.276955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.276983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.277407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.277435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.277797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.277826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.278178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.278207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 
00:29:40.206 [2024-11-20 07:43:58.278546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.278575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.278928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.278956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.279325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.279352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.279719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.279771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.280011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.280043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.280425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.280453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.280815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.280846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.281203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.281231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.281569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.281598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.281979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.282008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 
00:29:40.206 [2024-11-20 07:43:58.282375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.282402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.282766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.282795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.283123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.283151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.283537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.283566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.283928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.283956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.284294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.284323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.284686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.284714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.284952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.284981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.285221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.206 [2024-11-20 07:43:58.285253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.206 qpair failed and we were unable to recover it. 00:29:40.206 [2024-11-20 07:43:58.285648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.285678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 
00:29:40.207 [2024-11-20 07:43:58.286037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.286067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.286433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.286461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.286776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.286805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.287177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.287204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.287575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.287602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.287972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.288001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.288365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.288393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.288642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.288674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.289053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.289083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.289445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.289472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 
00:29:40.207 [2024-11-20 07:43:58.289825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.289854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.290205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.290233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.290557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.290592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.290949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.290978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.291344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.291372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.291739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.291777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.292021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.292049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.292424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.292451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.292809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.292838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.293201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.293229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 
00:29:40.207 [2024-11-20 07:43:58.293593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.293621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.294005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.294034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.294375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.294403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.294760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.294789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.295164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.295191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.295531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.295559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.295931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.295961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.296326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.296354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.296722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.296758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.297006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.297037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 
00:29:40.207 [2024-11-20 07:43:58.297393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.297421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.297785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.297815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.298188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.207 [2024-11-20 07:43:58.298215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.207 qpair failed and we were unable to recover it. 00:29:40.207 [2024-11-20 07:43:58.298563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.298590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.298932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.298962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.299316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.299344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.299708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.299736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.299996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.300027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.300413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.300442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.300676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.300709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 
00:29:40.208 [2024-11-20 07:43:58.300918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.300948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.301201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.301232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.301522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.301550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.301899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.301929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.302274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.302302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.302664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.302691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.302945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.302975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.303341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.303368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.303738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.303788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.304144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.304172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 
00:29:40.208 [2024-11-20 07:43:58.304530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.304557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.304924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.304953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.305328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.305362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.305700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.305727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.306162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.306191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.306544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.306572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.306932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.306961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.307321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.307349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.307711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.307739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 00:29:40.208 [2024-11-20 07:43:58.308193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.308223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it. 
00:29:40.208 [2024-11-20 07:43:58.308558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.208 [2024-11-20 07:43:58.308586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.208 qpair failed and we were unable to recover it.
00:29:40.208 [... the same three-line error group repeats verbatim for every subsequent connection attempt, timestamps 07:43:58.308 through 07:43:58.386, always errno = 111 against tqpair=0x7fa118000b90, addr=10.0.0.2, port=4420 ...]
00:29:40.569 [2024-11-20 07:43:58.386757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.386789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it.
00:29:40.569 [2024-11-20 07:43:58.387173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.387202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.387568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.387596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.387971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.388000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.388242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.388274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.388616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.388644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.388984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.389013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.389374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.389403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.389644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.389673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.390048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.390079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.390442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.390478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 
00:29:40.569 [2024-11-20 07:43:58.390844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.390874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.391249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.391277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.391636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.391663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.392033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.392063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.392227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.392254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.392646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.392673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.393005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.393033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.393401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.393429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.393869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.393898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.394311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.394339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 
00:29:40.569 [2024-11-20 07:43:58.394696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.394724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.395097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.395125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.395479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.395507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.395915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.395944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.396304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.396334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.396704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.396733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.397090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.397119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.397356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.397388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.397767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.569 [2024-11-20 07:43:58.397797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.569 qpair failed and we were unable to recover it. 00:29:40.569 [2024-11-20 07:43:58.398058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.398088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 
00:29:40.570 [2024-11-20 07:43:58.398436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.398465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.398830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.398859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.399210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.399239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.399581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.399610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.399965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.399998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.400376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.400405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.400782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.400813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.401162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.401190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.401554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.401582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.401926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.401955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 
00:29:40.570 [2024-11-20 07:43:58.402324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.402355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.402718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.402769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.403124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.403156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.403496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.403525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.403877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.403907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.404265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.404294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.404641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.404670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.405031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.405062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.405312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.405343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.405692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.405728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 
00:29:40.570 [2024-11-20 07:43:58.406079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.406111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.406477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.406508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.406852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.406883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.407264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.407295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.407646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.407676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.408044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.408076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.408443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.408474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.408825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.408856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.409223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.409253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.409491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.409521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 
00:29:40.570 [2024-11-20 07:43:58.409877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.409906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.410117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.410148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.410547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.410575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.410934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.410965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.411329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.411358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.411716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.411756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.570 [2024-11-20 07:43:58.412091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.570 [2024-11-20 07:43:58.412121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.570 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.412491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.412521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.412875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.412907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.413310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.413337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 
00:29:40.571 [2024-11-20 07:43:58.413703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.413732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.414112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.414142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.414379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.414406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.414671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.414703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.415112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.415143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.416872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.416936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.417202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.417236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.417662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.417693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.418102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.418136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.418473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.418506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 
00:29:40.571 [2024-11-20 07:43:58.418783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.418817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.419185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.419216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.419353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.419384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.419650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.419681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.419965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.419995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.420346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.420375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.420732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.420770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.421113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.421149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.421521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.421551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.421901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.421941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 
00:29:40.571 [2024-11-20 07:43:58.422272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.422302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.422587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.422617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.422889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.422920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.423720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.423782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.424176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.424206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.424649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.424679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.425074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.425105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.425451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.425479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.425822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.425852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.426182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.426213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 
00:29:40.571 [2024-11-20 07:43:58.426559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.426589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.426940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.426972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.427345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.427373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.427815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.427846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.571 qpair failed and we were unable to recover it. 00:29:40.571 [2024-11-20 07:43:58.428254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.571 [2024-11-20 07:43:58.428288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.428626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.428655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.429017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.429048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.429407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.429439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.429805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.429836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.430198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.430228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 
00:29:40.572 [2024-11-20 07:43:58.430591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.430621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.430879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.430907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.431259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.431288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.431658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.431688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.432066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.432096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.432474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.432504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.432862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.432893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.433271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.433300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.433670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.433698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.434005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.434036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 
00:29:40.572 [2024-11-20 07:43:58.434395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.434425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.434810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.434843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.435147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.435177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.435427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.435454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.435805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.435834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.436125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.436154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.436564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.436592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.436972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.437004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.437352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.437381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 00:29:40.572 [2024-11-20 07:43:58.437707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.572 [2024-11-20 07:43:58.437743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.572 qpair failed and we were unable to recover it. 
00:29:40.572 [2024-11-20 07:43:58.438033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.572 [2024-11-20 07:43:58.438063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:40.572 qpair failed and we were unable to recover it.
00:29:40.572 [... same error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeated for every retry from 07:43:58.438425 through 07:43:58.517444 ...]
00:29:40.578 [2024-11-20 07:43:58.517826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.578 [2024-11-20 07:43:58.517856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:40.578 qpair failed and we were unable to recover it.
00:29:40.578 [2024-11-20 07:43:58.518096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.518124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.518506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.518534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.518893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.518923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.519174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.519203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.519564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.519591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.519966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.519995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.520247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.520279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.520654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.520681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.521070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.521099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.521445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.521472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 
00:29:40.578 [2024-11-20 07:43:58.521725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.521772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.522153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.522181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.522633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.522660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.522988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.523017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.523388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.523416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.523782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.523812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.524168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.524197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.524570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.524599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.524976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.525005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.525371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.525399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 
00:29:40.578 [2024-11-20 07:43:58.525843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.525873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.526238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.526265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.526604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.526632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.526975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.527004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.527354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.578 [2024-11-20 07:43:58.527382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.578 qpair failed and we were unable to recover it. 00:29:40.578 [2024-11-20 07:43:58.527742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.527781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.528140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.528169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.528536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.528564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.528906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.528935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.529368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.529401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 
00:29:40.579 [2024-11-20 07:43:58.529846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.529876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.530238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.530266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.530610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.530637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.530898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.530927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.531305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.531333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.531736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.531792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.532182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.532211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.532539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.532568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.532927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.532958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.533323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.533351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 
00:29:40.579 [2024-11-20 07:43:58.533717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.533753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.534092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.534120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.534553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.534581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.534928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.534957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.535319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.535347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.535716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.535744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.535886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.535918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.536164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.536195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.536454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.536482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.536842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.536871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 
00:29:40.579 [2024-11-20 07:43:58.537227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.537254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.537616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.537644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.537997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.538025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.538376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.538403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.538768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.538797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.539162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.539189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.539553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.539581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.539945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.539974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.540331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.540358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.540620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.540648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 
00:29:40.579 [2024-11-20 07:43:58.540989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.541018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.541252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.541282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.541678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.541708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.542064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.542093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.542453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.542482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.542852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.542882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.543252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.579 [2024-11-20 07:43:58.543280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.579 qpair failed and we were unable to recover it. 00:29:40.579 [2024-11-20 07:43:58.543650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.543679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.544038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.544066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.544436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.544464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 
00:29:40.580 [2024-11-20 07:43:58.544840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.544870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.545207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.545235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.545602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.545630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.545997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.546027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.546402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.546429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.546664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.546691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.547060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.547090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.547468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.547495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.547863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.547892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.548236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.548264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 
00:29:40.580 [2024-11-20 07:43:58.548629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.548657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.549022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.549051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.549403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.549431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.549796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.549826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.550294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.550323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.550776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.550806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.551161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.551189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.551543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.551571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.551910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.551939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.552240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.552268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 
00:29:40.580 [2024-11-20 07:43:58.552624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.552652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.552992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.553022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.553394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.553422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.553731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.553769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.554125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.554154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.554526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.554554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.554919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.554954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.555313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.555341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.555707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.555735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.556113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.556141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 
00:29:40.580 [2024-11-20 07:43:58.556351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.556383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.556839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.556870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.557267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.557295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.557537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.557564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.557933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.557962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.558376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.558404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.558783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.558815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.558982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.559011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.559377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.559406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.580 qpair failed and we were unable to recover it. 00:29:40.580 [2024-11-20 07:43:58.559771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.580 [2024-11-20 07:43:58.559801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-11-20 07:43:58.560073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.560105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.560481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.560509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.560947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.560977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.561336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.561364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.561769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.561798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.562139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.562168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.562506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.562534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.562900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.562929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.563268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.563296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.563694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.563723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-11-20 07:43:58.564087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.564115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.564482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.564509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.564889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.564918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.565275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.565303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.565666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.565695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.566045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.566075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.566342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.566370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.566721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.566768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.567098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.567126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 00:29:40.581 [2024-11-20 07:43:58.567488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.581 [2024-11-20 07:43:58.567516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-11-20 07:43:58.567885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.581 [2024-11-20 07:43:58.567915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:40.581 qpair failed and we were unable to recover it.
00:29:40.581 [2024-11-20 07:43:58.568264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.581 [2024-11-20 07:43:58.568302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:40.581 qpair failed and we were unable to recover it.
00:29:40.581 .. 00:29:40.586 [2024-11-20 07:43:58.568639 .. 07:43:58.646617] (the same three-line failure repeats for every connect retry in this window: posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:29:40.586 [2024-11-20 07:43:58.647002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.647032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.647382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.647410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.647775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.647804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.648158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.648186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.648552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.648580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.648968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.648997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.649379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.649407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.649774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.649804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.650176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.650204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.650558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.650585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 
00:29:40.586 [2024-11-20 07:43:58.650955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.650986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.651246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.651273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.651651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.651678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.652046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.652075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.652335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.652363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.652743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.652783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.653066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.653096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.653449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.653478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.653824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.653856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.654221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.654249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 
00:29:40.586 [2024-11-20 07:43:58.654620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.654648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.654912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.654941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.655296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.655327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.655720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.655757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.656192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.656222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.656566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.656596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.656969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.656999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.586 qpair failed and we were unable to recover it. 00:29:40.586 [2024-11-20 07:43:58.657360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-11-20 07:43:58.657389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.657761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.657793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.658072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.658105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 
00:29:40.587 [2024-11-20 07:43:58.658457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.658487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.658786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.658816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.659166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.659194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.659560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.659588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.659965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.659994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.660347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.660375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.660753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.660789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.661129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.661167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.661403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.661434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.661805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.661835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 
00:29:40.587 [2024-11-20 07:43:58.662218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.662245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.662611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.662639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.663013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.663042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.663396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.663424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.663784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.663814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.664193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.664220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.664581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.664612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.664976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.665005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.665341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.665369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.665621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.665652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 
00:29:40.587 [2024-11-20 07:43:58.665943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.665973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.666363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.666392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.666686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.666715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.667115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.667144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.667505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.667534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.667905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.667936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.668303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.668331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.668704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.668732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.669108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.669138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.669507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.669536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 
00:29:40.587 [2024-11-20 07:43:58.669945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.669975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.670323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.670352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.670724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.670760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.671119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.671149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.671490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.671520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.671890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.671922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.672266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.672294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.672660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.672688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.673044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.673074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.673432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.673461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 
00:29:40.587 [2024-11-20 07:43:58.673821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.673853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.674212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.674241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.674602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.674632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.674999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.675030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.675386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.675416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.675774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.675804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.587 qpair failed and we were unable to recover it. 00:29:40.587 [2024-11-20 07:43:58.676170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-11-20 07:43:58.676206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.676542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.676579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.676850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.676880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.677265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.677294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 
00:29:40.588 [2024-11-20 07:43:58.677651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.677680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.678015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.678043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.678382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.678412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.678833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.678862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.679213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.679244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.679615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.679643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.680042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.680072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.680419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.680448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.680802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.680831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.681160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.681188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 
00:29:40.588 [2024-11-20 07:43:58.681567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.681595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.681942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.681974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.682325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.682353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.682722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.682763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.683126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.683154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.683405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.683433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.683781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.683812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.684202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.684233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.684618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.684649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.685019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.685048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 
00:29:40.588 [2024-11-20 07:43:58.685372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.685402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.685763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.685793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.686175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.686203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.686507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.686535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.686783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.686814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.687131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.687160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.687509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.687537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.687923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.687954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.688311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.688341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.688709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.688740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 
00:29:40.588 [2024-11-20 07:43:58.689120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.689151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.689523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.689552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.689879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.689908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.690294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.690324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.690676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.690708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.691094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.691124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.691479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.691514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.691903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.691932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.692278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.692309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.692554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.692582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 
00:29:40.588 [2024-11-20 07:43:58.692761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.692791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.693161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.693192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.693448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.693476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.693826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.693854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.694233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.694263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.694609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.694640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.588 qpair failed and we were unable to recover it. 00:29:40.588 [2024-11-20 07:43:58.694995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.588 [2024-11-20 07:43:58.695028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.589 qpair failed and we were unable to recover it. 00:29:40.589 [2024-11-20 07:43:58.695307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.589 [2024-11-20 07:43:58.695335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.589 qpair failed and we were unable to recover it. 00:29:40.589 [2024-11-20 07:43:58.695558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.589 [2024-11-20 07:43:58.695593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.589 qpair failed and we were unable to recover it. 00:29:40.589 [2024-11-20 07:43:58.695917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.589 [2024-11-20 07:43:58.695947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.589 qpair failed and we were unable to recover it. 
00:29:40.589 [2024-11-20 07:43:58.696185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.589 [2024-11-20 07:43:58.696213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:40.589 qpair failed and we were unable to recover it.
[... the same pair of errors -- posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 -- repeats for every reconnect attempt from 07:43:58.696 through 07:43:58.778, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:40.866 [2024-11-20 07:43:58.778293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.866 [2024-11-20 07:43:58.778322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:40.866 qpair failed and we were unable to recover it.
00:29:40.866 [2024-11-20 07:43:58.778678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-11-20 07:43:58.778707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-11-20 07:43:58.779074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-11-20 07:43:58.779105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-11-20 07:43:58.779477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-11-20 07:43:58.779515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-11-20 07:43:58.779890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-11-20 07:43:58.779927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-11-20 07:43:58.780300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-11-20 07:43:58.780328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-11-20 07:43:58.780681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-11-20 07:43:58.780709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-11-20 07:43:58.781087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-11-20 07:43:58.781117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-11-20 07:43:58.781488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-11-20 07:43:58.781515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-11-20 07:43:58.781882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-11-20 07:43:58.781911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-11-20 07:43:58.782281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-11-20 07:43:58.782309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 
00:29:40.866 [2024-11-20 07:43:58.782758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-11-20 07:43:58.782787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.866 [2024-11-20 07:43:58.783156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.866 [2024-11-20 07:43:58.783184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.866 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.783550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.783578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.783943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.783972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.784322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.784350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.784710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.784738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.785117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.785144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.785386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.785414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.785844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.785873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.786234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.786262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 
00:29:40.867 [2024-11-20 07:43:58.786635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.786663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.786917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.786949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.787296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.787325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.787690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.787718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.788090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.788118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.788487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.788515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.788877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.788907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.789284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.789312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.789682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.789710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.790071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.790101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 
00:29:40.867 [2024-11-20 07:43:58.790460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.790489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.790858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.790888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.791264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.791291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.791646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.791673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.792036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.792065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.792425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.792453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.792820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.792850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.793225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.793252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.793620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.793648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.793902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.793931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 
00:29:40.867 [2024-11-20 07:43:58.794288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.794315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.794677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.794705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.795070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.795098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.795340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.795377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.795721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.795758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.796135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.796163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.796442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.796469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.796838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.796868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.797233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.867 [2024-11-20 07:43:58.797260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.867 qpair failed and we were unable to recover it. 00:29:40.867 [2024-11-20 07:43:58.797629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.797656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 
00:29:40.868 [2024-11-20 07:43:58.798023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.798054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.798417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.798445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.798807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.798836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.799190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.799218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.799562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.799590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.800027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.800056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.800388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.800416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.800712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.800739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.801106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.801136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.801376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.801408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 
00:29:40.868 [2024-11-20 07:43:58.801774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.801804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.802087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.802115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.802462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.802490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.802833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.802861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.803243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.803271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.803642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.803671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.804020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.804050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.804426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.804454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.804828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.804857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.805225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.805251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 
00:29:40.868 [2024-11-20 07:43:58.805606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.805635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.805977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.806008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.806346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.806374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.806741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.806782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.807147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.807174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.807546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.807575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.807930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.807958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.808306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.808336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.810246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.810304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.810740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.810791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 
00:29:40.868 [2024-11-20 07:43:58.811143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.811171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.811500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.811528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.811895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.811927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.812278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.812314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.812670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.812699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.813038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.868 [2024-11-20 07:43:58.813067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.868 qpair failed and we were unable to recover it. 00:29:40.868 [2024-11-20 07:43:58.813419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.813447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.813817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.813847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.814095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.814126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.814505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.814533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 
00:29:40.869 [2024-11-20 07:43:58.814899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.814932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.815304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.815333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.815572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.815604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.815968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.815999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.816363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.816392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.816758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.816788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.817134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.817162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.817521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.817550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.817928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.817961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.818318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.818346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 
00:29:40.869 [2024-11-20 07:43:58.818711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.818739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.819187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.819215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.819581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.819609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.819984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.820015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.820259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.820291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.820626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.820658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.821003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.821032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.821404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.821431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.821791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.821821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.822207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.822234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 
00:29:40.869 [2024-11-20 07:43:58.822611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.822640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.822995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.823024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.823419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.823447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.823779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.823810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.824140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.824168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.824525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.824554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.824908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.824937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.825284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.825313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.825555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.825587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.825957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.825988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 
00:29:40.869 [2024-11-20 07:43:58.826344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.826375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.826734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.826779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.827109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.869 [2024-11-20 07:43:58.827139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.869 qpair failed and we were unable to recover it. 00:29:40.869 [2024-11-20 07:43:58.827474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.827511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.827902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.827934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.828306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.828336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.828714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.828743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.829137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.829168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.829516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.829545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.829885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.829914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 
00:29:40.870 [2024-11-20 07:43:58.830282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.830311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.830676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.830704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.831013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.831043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.831414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.831443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.831789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.831819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.831988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.832017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.832399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.832427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.832700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.832729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.832908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.832941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 00:29:40.870 [2024-11-20 07:43:58.833316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.870 [2024-11-20 07:43:58.833344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.870 qpair failed and we were unable to recover it. 
00:29:40.875 [2024-11-20 07:43:58.905725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-20 07:43:58.905763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-20 07:43:58.907599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-20 07:43:58.907663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-20 07:43:58.908149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-20 07:43:58.908185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-20 07:43:58.908597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-20 07:43:58.908627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-20 07:43:58.908969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-20 07:43:58.909000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-20 07:43:58.909353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-20 07:43:58.909383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-20 07:43:58.909767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.875 [2024-11-20 07:43:58.909796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.875 qpair failed and we were unable to recover it. 00:29:40.875 [2024-11-20 07:43:58.910207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.910236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.910605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.910634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.910996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.911028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 
00:29:40.876 [2024-11-20 07:43:58.911258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.911287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.911699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.911727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.911912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.911948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.912191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.912223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.912580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.912610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.912974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.913007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.913368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.913399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.913775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.913805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.914058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.914087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.914478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.914508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 
00:29:40.876 [2024-11-20 07:43:58.914863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.914897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.915236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.915267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.917034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.917097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.917525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.917560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.919299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.919354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.919777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.919811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.920229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.920258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.920621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.920652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.921006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.921039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.921399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.921430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 
00:29:40.876 [2024-11-20 07:43:58.921802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.921833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.922203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.922232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.922487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.922518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.922924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.922963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.923310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.923339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.923708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.923739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.924175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.924205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.924531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.924562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.924931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.924961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.925326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.925356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 
00:29:40.876 [2024-11-20 07:43:58.925714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.925742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.926125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.926156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.926508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.926539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.926882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.926913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.927317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.876 [2024-11-20 07:43:58.927345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.876 qpair failed and we were unable to recover it. 00:29:40.876 [2024-11-20 07:43:58.927698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.927728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.928106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.928136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.928580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.928612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.928986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.929016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.929382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.929411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 
00:29:40.877 [2024-11-20 07:43:58.929775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.929805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.930152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.930181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.930549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.930577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.930838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.930869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.931255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.931286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.931645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.931675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.933423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.933482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.933845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.933883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.934258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.934288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.934591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.934621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 
00:29:40.877 [2024-11-20 07:43:58.935029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.935062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.935351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.935381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.935733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.935771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.936137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.936168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.936535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.936567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.936943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.936972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.937335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.937363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.937729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.937769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.938172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.938205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.938545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.938575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 
00:29:40.877 [2024-11-20 07:43:58.938929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.938960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.939321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.939351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.939730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.939799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.940179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.940208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.940568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.940598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.940974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.941004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.941244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.941276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.943106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.943166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.943596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.943630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.944008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.944043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 
00:29:40.877 [2024-11-20 07:43:58.944462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.944490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.944853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.944884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.877 qpair failed and we were unable to recover it. 00:29:40.877 [2024-11-20 07:43:58.945247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.877 [2024-11-20 07:43:58.945276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.945514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.945544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.945895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.945924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.946325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.946354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.946723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.946763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.947174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.947205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.947448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.947476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.947914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.947946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 
00:29:40.878 [2024-11-20 07:43:58.948282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.948312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.948529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.948558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.948833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.948864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.949131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.949161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.949516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.949546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.949922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.949953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.950337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.950366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.950758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.950788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.951142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.951171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.951533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.951562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 
00:29:40.878 [2024-11-20 07:43:58.951964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.952000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.952384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.952415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.952767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.952798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.953096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.953124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.953482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.953511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.953772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.953801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.954149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.954178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.954429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.954458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.954712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.954742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.954947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.954978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 
00:29:40.878 [2024-11-20 07:43:58.955394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.955423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.955810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.955841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.956195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.956224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.956585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.956614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.956908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.878 [2024-11-20 07:43:58.956939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.878 qpair failed and we were unable to recover it. 00:29:40.878 [2024-11-20 07:43:58.957304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.957334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.957709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.957738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.958134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.958164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.958396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.958425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.958790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.958820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 
00:29:40.879 [2024-11-20 07:43:58.959054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.959083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.959443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.959473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.959865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.959897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.960336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.960364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.960734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.960773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.961119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.961151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.961517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.961545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.961815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.961846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.962210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.962241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.962609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.962639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 
00:29:40.879 [2024-11-20 07:43:58.962982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.963013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.963391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.963420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.963828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.963859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.964105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.964138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.964486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.964514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.964792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.964821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.965090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.965118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.965479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.965507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.965897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.965925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 00:29:40.879 [2024-11-20 07:43:58.966382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.879 [2024-11-20 07:43:58.966411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.879 qpair failed and we were unable to recover it. 
00:29:40.879 [2024-11-20 07:43:58.966792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.879 [2024-11-20 07:43:58.966829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:40.879 qpair failed and we were unable to recover it.
00:29:40.879 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats 208 more times, with timestamps from 2024-11-20 07:43:58.967220 through 07:43:59.048454 ...]
00:29:40.885 [2024-11-20 07:43:59.048829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.885 [2024-11-20 07:43:59.048858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:40.885 qpair failed and we were unable to recover it.
00:29:40.885 [2024-11-20 07:43:59.049221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.049250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-20 07:43:59.049616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.049644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-20 07:43:59.049993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.050022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-20 07:43:59.050331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.050360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-20 07:43:59.050728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.050767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-20 07:43:59.051142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.051170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-20 07:43:59.051459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.051489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-20 07:43:59.051847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.051877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-20 07:43:59.052244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.052272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-20 07:43:59.052630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.052659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 
00:29:40.885 [2024-11-20 07:43:59.053034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.053064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-20 07:43:59.053427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.053455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-20 07:43:59.053823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.053853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-20 07:43:59.055684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.885 [2024-11-20 07:43:59.055743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.885 qpair failed and we were unable to recover it. 00:29:40.885 [2024-11-20 07:43:59.056224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.886 [2024-11-20 07:43:59.056258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.886 qpair failed and we were unable to recover it. 00:29:40.886 [2024-11-20 07:43:59.056622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.886 [2024-11-20 07:43:59.056651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.886 qpair failed and we were unable to recover it. 00:29:40.886 [2024-11-20 07:43:59.056998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.886 [2024-11-20 07:43:59.057028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.886 qpair failed and we were unable to recover it. 00:29:40.886 [2024-11-20 07:43:59.057403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.886 [2024-11-20 07:43:59.057432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.886 qpair failed and we were unable to recover it. 00:29:40.886 [2024-11-20 07:43:59.057797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.886 [2024-11-20 07:43:59.057827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.886 qpair failed and we were unable to recover it. 00:29:40.886 [2024-11-20 07:43:59.058214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.886 [2024-11-20 07:43:59.058242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.886 qpair failed and we were unable to recover it. 
00:29:40.886 [2024-11-20 07:43:59.058601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.886 [2024-11-20 07:43:59.058631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.886 qpair failed and we were unable to recover it. 00:29:40.886 [2024-11-20 07:43:59.058891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.886 [2024-11-20 07:43:59.058924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.886 qpair failed and we were unable to recover it. 00:29:40.886 [2024-11-20 07:43:59.059295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.886 [2024-11-20 07:43:59.059323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:40.886 qpair failed and we were unable to recover it. 00:29:41.160 [2024-11-20 07:43:59.059684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.160 [2024-11-20 07:43:59.059715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.160 qpair failed and we were unable to recover it. 00:29:41.160 [2024-11-20 07:43:59.060104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.160 [2024-11-20 07:43:59.060133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.160 qpair failed and we were unable to recover it. 00:29:41.160 [2024-11-20 07:43:59.060456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.160 [2024-11-20 07:43:59.060483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.160 qpair failed and we were unable to recover it. 00:29:41.160 [2024-11-20 07:43:59.060848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.160 [2024-11-20 07:43:59.060879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.160 qpair failed and we were unable to recover it. 00:29:41.160 [2024-11-20 07:43:59.061242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-11-20 07:43:59.061271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-11-20 07:43:59.061631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-11-20 07:43:59.061660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-11-20 07:43:59.062043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-11-20 07:43:59.062074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 
00:29:41.161 [2024-11-20 07:43:59.062431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-11-20 07:43:59.062466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-11-20 07:43:59.062889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-11-20 07:43:59.062919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-11-20 07:43:59.063292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-11-20 07:43:59.063322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-11-20 07:43:59.063681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-11-20 07:43:59.063710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-11-20 07:43:59.063998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-11-20 07:43:59.064030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-11-20 07:43:59.064365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.161 [2024-11-20 07:43:59.064392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.161 qpair failed and we were unable to recover it. 00:29:41.161 [2024-11-20 07:43:59.064767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-11-20 07:43:59.064796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-11-20 07:43:59.065194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-11-20 07:43:59.065222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-11-20 07:43:59.065587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-11-20 07:43:59.065616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-11-20 07:43:59.065994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-11-20 07:43:59.066024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 
00:29:41.162 [2024-11-20 07:43:59.066319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-11-20 07:43:59.066347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-11-20 07:43:59.066694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-11-20 07:43:59.066723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-11-20 07:43:59.066986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-11-20 07:43:59.067019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-11-20 07:43:59.067311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-11-20 07:43:59.067341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-11-20 07:43:59.067707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.162 [2024-11-20 07:43:59.067737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.162 qpair failed and we were unable to recover it. 00:29:41.162 [2024-11-20 07:43:59.068097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-11-20 07:43:59.068125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-11-20 07:43:59.068489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-11-20 07:43:59.068517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-11-20 07:43:59.068883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-11-20 07:43:59.068912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-11-20 07:43:59.069280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-11-20 07:43:59.069308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-11-20 07:43:59.069664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-11-20 07:43:59.069693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 
00:29:41.163 [2024-11-20 07:43:59.070057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-11-20 07:43:59.070087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-11-20 07:43:59.070463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-11-20 07:43:59.070491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-11-20 07:43:59.070847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.163 [2024-11-20 07:43:59.070876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.163 qpair failed and we were unable to recover it. 00:29:41.163 [2024-11-20 07:43:59.071212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-11-20 07:43:59.071240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-11-20 07:43:59.071609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-11-20 07:43:59.071637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-11-20 07:43:59.072058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-11-20 07:43:59.072091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.168 [2024-11-20 07:43:59.072458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.168 [2024-11-20 07:43:59.072487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.168 qpair failed and we were unable to recover it. 00:29:41.169 [2024-11-20 07:43:59.072732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-11-20 07:43:59.072770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-11-20 07:43:59.073156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-11-20 07:43:59.073186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-11-20 07:43:59.073551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-11-20 07:43:59.073579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 
00:29:41.169 [2024-11-20 07:43:59.073941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-11-20 07:43:59.073971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-11-20 07:43:59.074220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-11-20 07:43:59.074251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.169 [2024-11-20 07:43:59.074608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.169 [2024-11-20 07:43:59.074637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.169 qpair failed and we were unable to recover it. 00:29:41.170 [2024-11-20 07:43:59.074983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-11-20 07:43:59.075013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-11-20 07:43:59.075382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-11-20 07:43:59.075410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-11-20 07:43:59.075760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-11-20 07:43:59.075791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-11-20 07:43:59.076082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-11-20 07:43:59.076111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-11-20 07:43:59.076470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-11-20 07:43:59.076498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-11-20 07:43:59.076793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-11-20 07:43:59.076821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-11-20 07:43:59.077191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-11-20 07:43:59.077219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 
00:29:41.170 [2024-11-20 07:43:59.077601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-11-20 07:43:59.077635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-11-20 07:43:59.077981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-11-20 07:43:59.078010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.170 qpair failed and we were unable to recover it. 00:29:41.170 [2024-11-20 07:43:59.078374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.170 [2024-11-20 07:43:59.078402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-11-20 07:43:59.078775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-11-20 07:43:59.078806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-11-20 07:43:59.079053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-11-20 07:43:59.079085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-11-20 07:43:59.079462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-11-20 07:43:59.079490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-11-20 07:43:59.079861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-11-20 07:43:59.079891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-11-20 07:43:59.080266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-11-20 07:43:59.080295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-11-20 07:43:59.080650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-11-20 07:43:59.080679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-11-20 07:43:59.081040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-11-20 07:43:59.081071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 
00:29:41.171 [2024-11-20 07:43:59.081444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-11-20 07:43:59.081473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-11-20 07:43:59.081841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-11-20 07:43:59.081870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.171 qpair failed and we were unable to recover it. 00:29:41.171 [2024-11-20 07:43:59.082233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.171 [2024-11-20 07:43:59.082260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-11-20 07:43:59.082638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-11-20 07:43:59.082667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-11-20 07:43:59.082900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-11-20 07:43:59.082933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-11-20 07:43:59.083201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-11-20 07:43:59.083229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-11-20 07:43:59.083480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-11-20 07:43:59.083508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-11-20 07:43:59.083857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-11-20 07:43:59.083886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-11-20 07:43:59.084254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-11-20 07:43:59.084282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.172 qpair failed and we were unable to recover it. 00:29:41.172 [2024-11-20 07:43:59.084645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.172 [2024-11-20 07:43:59.084672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 
00:29:41.173 [2024-11-20 07:43:59.085017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-11-20 07:43:59.085047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-11-20 07:43:59.085422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-11-20 07:43:59.085451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-11-20 07:43:59.085811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-11-20 07:43:59.085840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-11-20 07:43:59.086228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-11-20 07:43:59.086256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-11-20 07:43:59.086618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-11-20 07:43:59.086645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-11-20 07:43:59.086913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-11-20 07:43:59.086943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-11-20 07:43:59.087295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-11-20 07:43:59.087323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-11-20 07:43:59.087697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.173 [2024-11-20 07:43:59.087728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.173 qpair failed and we were unable to recover it. 00:29:41.173 [2024-11-20 07:43:59.088126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-11-20 07:43:59.088155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-11-20 07:43:59.088492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-11-20 07:43:59.088519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 
00:29:41.174 [2024-11-20 07:43:59.088865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-11-20 07:43:59.088895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-11-20 07:43:59.089264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-11-20 07:43:59.089292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-11-20 07:43:59.089655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-11-20 07:43:59.089684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-11-20 07:43:59.090062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-11-20 07:43:59.090092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-11-20 07:43:59.090461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-11-20 07:43:59.090489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-11-20 07:43:59.090825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-11-20 07:43:59.090855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.174 qpair failed and we were unable to recover it. 00:29:41.174 [2024-11-20 07:43:59.091212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.174 [2024-11-20 07:43:59.091240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-20 07:43:59.091614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-20 07:43:59.091641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-20 07:43:59.091900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-20 07:43:59.091929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-20 07:43:59.092318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-20 07:43:59.092347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 
00:29:41.175 [2024-11-20 07:43:59.092710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-20 07:43:59.092744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-20 07:43:59.093159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-20 07:43:59.093188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.175 [2024-11-20 07:43:59.093554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.175 [2024-11-20 07:43:59.093583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.175 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-20 07:43:59.093939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-20 07:43:59.093969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-20 07:43:59.094269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-20 07:43:59.094298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-20 07:43:59.094591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-20 07:43:59.094620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-20 07:43:59.095005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-20 07:43:59.095035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-20 07:43:59.095381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-20 07:43:59.095409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-20 07:43:59.095785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-20 07:43:59.095816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 00:29:41.176 [2024-11-20 07:43:59.095967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-20 07:43:59.095997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.176 qpair failed and we were unable to recover it. 
00:29:41.176 [2024-11-20 07:43:59.096376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.176 [2024-11-20 07:43:59.096405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-20 07:43:59.096808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-20 07:43:59.096838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-20 07:43:59.097076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-20 07:43:59.097104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-20 07:43:59.097458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-20 07:43:59.097485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-20 07:43:59.097737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-20 07:43:59.097777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-20 07:43:59.098113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.177 [2024-11-20 07:43:59.098142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.177 qpair failed and we were unable to recover it. 00:29:41.177 [2024-11-20 07:43:59.098405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-20 07:43:59.098434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-20 07:43:59.098797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-20 07:43:59.098829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-20 07:43:59.099211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.178 [2024-11-20 07:43:59.099241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.178 qpair failed and we were unable to recover it. 00:29:41.178 [2024-11-20 07:43:59.099601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.179 [2024-11-20 07:43:59.099629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.179 qpair failed and we were unable to recover it. 
00:29:41.179 [2024-11-20 07:43:59.100008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.179 [2024-11-20 07:43:59.100037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.179 qpair failed and we were unable to recover it.
00:29:41.203 [previous connect()/qpair error triplet repeated 209 more times, 2024-11-20 07:43:59.100415 through 07:43:59.179653: every reconnect attempt for tqpair=0x7fa118000b90 to addr=10.0.0.2, port=4420 failed with errno = 111 and the qpair could not be recovered]
00:29:41.203 [2024-11-20 07:43:59.180021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.180051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 00:29:41.203 [2024-11-20 07:43:59.180406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.180434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 00:29:41.203 [2024-11-20 07:43:59.180767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.180796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 00:29:41.203 [2024-11-20 07:43:59.181056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.181083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 00:29:41.203 [2024-11-20 07:43:59.181438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.181467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 00:29:41.203 [2024-11-20 07:43:59.181835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.181865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 00:29:41.203 [2024-11-20 07:43:59.182221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.182248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 00:29:41.203 [2024-11-20 07:43:59.182619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.182647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 00:29:41.203 [2024-11-20 07:43:59.182986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.183016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 00:29:41.203 [2024-11-20 07:43:59.183386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.183416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 
00:29:41.203 [2024-11-20 07:43:59.183781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.183811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 00:29:41.203 [2024-11-20 07:43:59.184166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.184194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 00:29:41.203 [2024-11-20 07:43:59.184574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.184603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 00:29:41.203 [2024-11-20 07:43:59.184929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.203 [2024-11-20 07:43:59.184959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.203 qpair failed and we were unable to recover it. 00:29:41.203 [2024-11-20 07:43:59.185197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.204 [2024-11-20 07:43:59.185225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.204 qpair failed and we were unable to recover it. 00:29:41.204 [2024-11-20 07:43:59.185579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.204 [2024-11-20 07:43:59.185608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.204 qpair failed and we were unable to recover it. 00:29:41.204 [2024-11-20 07:43:59.185981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.204 [2024-11-20 07:43:59.186012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.204 qpair failed and we were unable to recover it. 00:29:41.204 [2024-11-20 07:43:59.186358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.204 [2024-11-20 07:43:59.186386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.204 qpair failed and we were unable to recover it. 00:29:41.204 [2024-11-20 07:43:59.186758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.204 [2024-11-20 07:43:59.186787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.204 qpair failed and we were unable to recover it. 00:29:41.204 [2024-11-20 07:43:59.187133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.204 [2024-11-20 07:43:59.187161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.204 qpair failed and we were unable to recover it. 
00:29:41.204 [2024-11-20 07:43:59.187415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.204 [2024-11-20 07:43:59.187448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.205 qpair failed and we were unable to recover it. 00:29:41.205 [2024-11-20 07:43:59.187829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.205 [2024-11-20 07:43:59.187861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.206 qpair failed and we were unable to recover it. 00:29:41.206 [2024-11-20 07:43:59.188163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.206 [2024-11-20 07:43:59.188193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.206 qpair failed and we were unable to recover it. 00:29:41.206 [2024-11-20 07:43:59.188556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.206 [2024-11-20 07:43:59.188585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.206 qpair failed and we were unable to recover it. 00:29:41.206 [2024-11-20 07:43:59.188929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.206 [2024-11-20 07:43:59.188961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.206 qpair failed and we were unable to recover it. 00:29:41.206 [2024-11-20 07:43:59.189210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.206 [2024-11-20 07:43:59.189241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.206 qpair failed and we were unable to recover it. 00:29:41.206 [2024-11-20 07:43:59.189598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.206 [2024-11-20 07:43:59.189626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.206 qpair failed and we were unable to recover it. 00:29:41.206 [2024-11-20 07:43:59.189980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.206 [2024-11-20 07:43:59.190010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.206 qpair failed and we were unable to recover it. 00:29:41.206 [2024-11-20 07:43:59.190368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.206 [2024-11-20 07:43:59.190396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.206 qpair failed and we were unable to recover it. 00:29:41.206 [2024-11-20 07:43:59.190785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.206 [2024-11-20 07:43:59.190815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.206 qpair failed and we were unable to recover it. 
00:29:41.206 [2024-11-20 07:43:59.191190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.206 [2024-11-20 07:43:59.191218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.206 qpair failed and we were unable to recover it. 00:29:41.206 [2024-11-20 07:43:59.191455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.207 [2024-11-20 07:43:59.191487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.207 qpair failed and we were unable to recover it. 00:29:41.207 [2024-11-20 07:43:59.191839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.207 [2024-11-20 07:43:59.191869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.207 qpair failed and we were unable to recover it. 00:29:41.207 [2024-11-20 07:43:59.192220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.207 [2024-11-20 07:43:59.192248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.207 qpair failed and we were unable to recover it. 00:29:41.207 [2024-11-20 07:43:59.192614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.207 [2024-11-20 07:43:59.192642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.207 qpair failed and we were unable to recover it. 00:29:41.207 [2024-11-20 07:43:59.193005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.207 [2024-11-20 07:43:59.193035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.207 qpair failed and we were unable to recover it. 00:29:41.207 [2024-11-20 07:43:59.193474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.207 [2024-11-20 07:43:59.193502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.207 qpair failed and we were unable to recover it. 00:29:41.207 [2024-11-20 07:43:59.193889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.207 [2024-11-20 07:43:59.193926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.207 qpair failed and we were unable to recover it. 00:29:41.207 [2024-11-20 07:43:59.194300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.207 [2024-11-20 07:43:59.194328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.207 qpair failed and we were unable to recover it. 00:29:41.207 [2024-11-20 07:43:59.194692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.208 [2024-11-20 07:43:59.194720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.208 qpair failed and we were unable to recover it. 
00:29:41.208 [2024-11-20 07:43:59.195091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.208 [2024-11-20 07:43:59.195121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.208 qpair failed and we were unable to recover it. 00:29:41.208 [2024-11-20 07:43:59.195480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.208 [2024-11-20 07:43:59.195508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.208 qpair failed and we were unable to recover it. 00:29:41.208 [2024-11-20 07:43:59.195860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.208 [2024-11-20 07:43:59.195890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.208 qpair failed and we were unable to recover it. 00:29:41.208 [2024-11-20 07:43:59.196084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.208 [2024-11-20 07:43:59.196113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.208 qpair failed and we were unable to recover it. 00:29:41.208 [2024-11-20 07:43:59.196509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.208 [2024-11-20 07:43:59.196537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.208 qpair failed and we were unable to recover it. 00:29:41.208 [2024-11-20 07:43:59.196817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.209 [2024-11-20 07:43:59.196847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.209 qpair failed and we were unable to recover it. 00:29:41.209 [2024-11-20 07:43:59.197255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.209 [2024-11-20 07:43:59.197283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.209 qpair failed and we were unable to recover it. 00:29:41.209 [2024-11-20 07:43:59.197652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.209 [2024-11-20 07:43:59.197680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.209 qpair failed and we were unable to recover it. 00:29:41.209 [2024-11-20 07:43:59.198043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.209 [2024-11-20 07:43:59.198073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.209 qpair failed and we were unable to recover it. 00:29:41.209 [2024-11-20 07:43:59.198363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.209 [2024-11-20 07:43:59.198391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.209 qpair failed and we were unable to recover it. 
00:29:41.209 [2024-11-20 07:43:59.198833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.209 [2024-11-20 07:43:59.198862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.209 qpair failed and we were unable to recover it. 00:29:41.209 [2024-11-20 07:43:59.199225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.209 [2024-11-20 07:43:59.199254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.209 qpair failed and we were unable to recover it. 00:29:41.209 [2024-11-20 07:43:59.199620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.209 [2024-11-20 07:43:59.199649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.209 qpair failed and we were unable to recover it. 00:29:41.209 [2024-11-20 07:43:59.199942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.210 [2024-11-20 07:43:59.199972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.210 qpair failed and we were unable to recover it. 00:29:41.210 [2024-11-20 07:43:59.200415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.210 [2024-11-20 07:43:59.200443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.210 qpair failed and we were unable to recover it. 00:29:41.210 [2024-11-20 07:43:59.200702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.210 [2024-11-20 07:43:59.200732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.210 qpair failed and we were unable to recover it. 00:29:41.210 [2024-11-20 07:43:59.201112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.210 [2024-11-20 07:43:59.201142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.210 qpair failed and we were unable to recover it. 00:29:41.210 [2024-11-20 07:43:59.201505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.210 [2024-11-20 07:43:59.201533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.210 qpair failed and we were unable to recover it. 00:29:41.210 [2024-11-20 07:43:59.201904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.210 [2024-11-20 07:43:59.201934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.210 qpair failed and we were unable to recover it. 00:29:41.210 [2024-11-20 07:43:59.202304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.210 [2024-11-20 07:43:59.202332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.210 qpair failed and we were unable to recover it. 
00:29:41.210 [2024-11-20 07:43:59.202692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.210 [2024-11-20 07:43:59.202720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.210 qpair failed and we were unable to recover it. 00:29:41.210 [2024-11-20 07:43:59.203137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.210 [2024-11-20 07:43:59.203167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.210 qpair failed and we were unable to recover it. 00:29:41.210 [2024-11-20 07:43:59.203529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.210 [2024-11-20 07:43:59.203557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.210 qpair failed and we were unable to recover it. 00:29:41.210 [2024-11-20 07:43:59.203925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.210 [2024-11-20 07:43:59.203955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.210 qpair failed and we were unable to recover it. 00:29:41.210 [2024-11-20 07:43:59.204316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.210 [2024-11-20 07:43:59.204345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.210 qpair failed and we were unable to recover it. 00:29:41.210 [2024-11-20 07:43:59.204715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.210 [2024-11-20 07:43:59.204744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.210 qpair failed and we were unable to recover it. 00:29:41.211 [2024-11-20 07:43:59.205123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.211 [2024-11-20 07:43:59.205152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.211 qpair failed and we were unable to recover it. 00:29:41.211 [2024-11-20 07:43:59.205533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.211 [2024-11-20 07:43:59.205562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.211 qpair failed and we were unable to recover it. 00:29:41.211 [2024-11-20 07:43:59.205920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.211 [2024-11-20 07:43:59.205951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.211 qpair failed and we were unable to recover it. 00:29:41.211 [2024-11-20 07:43:59.206323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.211 [2024-11-20 07:43:59.206352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.211 qpair failed and we were unable to recover it. 
00:29:41.211 [2024-11-20 07:43:59.206714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.211 [2024-11-20 07:43:59.206742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.211 qpair failed and we were unable to recover it. 00:29:41.211 [2024-11-20 07:43:59.207114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.211 [2024-11-20 07:43:59.207145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.211 qpair failed and we were unable to recover it. 00:29:41.211 [2024-11-20 07:43:59.207432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.211 [2024-11-20 07:43:59.207460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.211 qpair failed and we were unable to recover it. 00:29:41.211 [2024-11-20 07:43:59.207818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.211 [2024-11-20 07:43:59.207848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.211 qpair failed and we were unable to recover it. 00:29:41.211 [2024-11-20 07:43:59.208116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.211 [2024-11-20 07:43:59.208148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.211 qpair failed and we were unable to recover it. 00:29:41.211 [2024-11-20 07:43:59.208395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.211 [2024-11-20 07:43:59.208423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.211 qpair failed and we were unable to recover it. 00:29:41.211 [2024-11-20 07:43:59.208808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.211 [2024-11-20 07:43:59.208837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.212 qpair failed and we were unable to recover it. 00:29:41.212 [2024-11-20 07:43:59.209205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.212 [2024-11-20 07:43:59.209239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.212 qpair failed and we were unable to recover it. 00:29:41.212 [2024-11-20 07:43:59.209601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.212 [2024-11-20 07:43:59.209630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.212 qpair failed and we were unable to recover it. 00:29:41.212 [2024-11-20 07:43:59.209979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.212 [2024-11-20 07:43:59.210010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.212 qpair failed and we were unable to recover it. 
00:29:41.212 [2024-11-20 07:43:59.210369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.212 [2024-11-20 07:43:59.210398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.212 qpair failed and we were unable to recover it. 00:29:41.212 [2024-11-20 07:43:59.210768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.212 [2024-11-20 07:43:59.210798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.212 qpair failed and we were unable to recover it. 00:29:41.212 [2024-11-20 07:43:59.211198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.212 [2024-11-20 07:43:59.211226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.212 qpair failed and we were unable to recover it. 00:29:41.212 [2024-11-20 07:43:59.211600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.212 [2024-11-20 07:43:59.211628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.213 qpair failed and we were unable to recover it. 00:29:41.213 [2024-11-20 07:43:59.211977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.213 [2024-11-20 07:43:59.212007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.213 qpair failed and we were unable to recover it. 00:29:41.213 [2024-11-20 07:43:59.212370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.213 [2024-11-20 07:43:59.212399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.213 qpair failed and we were unable to recover it. 00:29:41.213 [2024-11-20 07:43:59.212654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.213 [2024-11-20 07:43:59.212686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.213 qpair failed and we were unable to recover it. 00:29:41.213 [2024-11-20 07:43:59.212930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.213 [2024-11-20 07:43:59.212963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.213 qpair failed and we were unable to recover it. 00:29:41.213 [2024-11-20 07:43:59.213310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.213 [2024-11-20 07:43:59.213338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.213 qpair failed and we were unable to recover it. 00:29:41.213 [2024-11-20 07:43:59.213692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.213 [2024-11-20 07:43:59.213719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.213 qpair failed and we were unable to recover it. 
00:29:41.213 [2024-11-20 07:43:59.213984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.213 [2024-11-20 07:43:59.214013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.213 qpair failed and we were unable to recover it. 00:29:41.213 [2024-11-20 07:43:59.214394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.213 [2024-11-20 07:43:59.214423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.213 qpair failed and we were unable to recover it. 00:29:41.213 [2024-11-20 07:43:59.214780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.213 [2024-11-20 07:43:59.214811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.214 qpair failed and we were unable to recover it. 00:29:41.214 [2024-11-20 07:43:59.215200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.214 [2024-11-20 07:43:59.215228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.214 qpair failed and we were unable to recover it. 00:29:41.214 [2024-11-20 07:43:59.215569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.214 [2024-11-20 07:43:59.215596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.214 qpair failed and we were unable to recover it. 00:29:41.214 [2024-11-20 07:43:59.215883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.214 [2024-11-20 07:43:59.215912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.214 qpair failed and we were unable to recover it. 00:29:41.214 [2024-11-20 07:43:59.216264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.214 [2024-11-20 07:43:59.216292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.214 qpair failed and we were unable to recover it. 00:29:41.214 [2024-11-20 07:43:59.216407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.214 [2024-11-20 07:43:59.216438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.214 qpair failed and we were unable to recover it. 00:29:41.214 [2024-11-20 07:43:59.216832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.214 [2024-11-20 07:43:59.216863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.214 qpair failed and we were unable to recover it. 00:29:41.214 [2024-11-20 07:43:59.217244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.214 [2024-11-20 07:43:59.217271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.214 qpair failed and we were unable to recover it. 
00:29:41.215 [2024-11-20 07:43:59.217639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.215 [2024-11-20 07:43:59.217667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.215 qpair failed and we were unable to recover it. 00:29:41.215 [2024-11-20 07:43:59.218035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.215 [2024-11-20 07:43:59.218065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.215 qpair failed and we were unable to recover it. 00:29:41.215 [2024-11-20 07:43:59.218429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.215 [2024-11-20 07:43:59.218457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.215 qpair failed and we were unable to recover it. 00:29:41.215 [2024-11-20 07:43:59.218821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.215 [2024-11-20 07:43:59.218852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.215 qpair failed and we were unable to recover it. 00:29:41.215 [2024-11-20 07:43:59.219215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.215 [2024-11-20 07:43:59.219245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.215 qpair failed and we were unable to recover it. 00:29:41.215 [2024-11-20 07:43:59.219621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.215 [2024-11-20 07:43:59.219648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.215 qpair failed and we were unable to recover it. 00:29:41.215 [2024-11-20 07:43:59.219909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.215 [2024-11-20 07:43:59.219940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.215 qpair failed and we were unable to recover it. 00:29:41.215 [2024-11-20 07:43:59.220296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.215 [2024-11-20 07:43:59.220324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.215 qpair failed and we were unable to recover it. 00:29:41.215 [2024-11-20 07:43:59.220701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.215 [2024-11-20 07:43:59.220730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.215 qpair failed and we were unable to recover it. 00:29:41.215 [2024-11-20 07:43:59.220982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.216 [2024-11-20 07:43:59.221015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.216 qpair failed and we were unable to recover it. 
00:29:41.216 [2024-11-20 07:43:59.221370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.216 [2024-11-20 07:43:59.221399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.216 qpair failed and we were unable to recover it. 00:29:41.216 [2024-11-20 07:43:59.221760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.216 [2024-11-20 07:43:59.221789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.216 qpair failed and we were unable to recover it. 00:29:41.216 [2024-11-20 07:43:59.222149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.216 [2024-11-20 07:43:59.222177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.216 qpair failed and we were unable to recover it. 00:29:41.216 [2024-11-20 07:43:59.222561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.216 [2024-11-20 07:43:59.222589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.216 qpair failed and we were unable to recover it. 00:29:41.216 [2024-11-20 07:43:59.222951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.216 [2024-11-20 07:43:59.222981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.216 qpair failed and we were unable to recover it. 00:29:41.216 [2024-11-20 07:43:59.223342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.216 [2024-11-20 07:43:59.223372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.216 qpair failed and we were unable to recover it. 00:29:41.216 [2024-11-20 07:43:59.223586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.216 [2024-11-20 07:43:59.223617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.216 qpair failed and we were unable to recover it. 00:29:41.216 [2024-11-20 07:43:59.224015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.216 [2024-11-20 07:43:59.224051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.216 qpair failed and we were unable to recover it. 00:29:41.216 [2024-11-20 07:43:59.224388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.216 [2024-11-20 07:43:59.224417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.216 qpair failed and we were unable to recover it. 00:29:41.216 [2024-11-20 07:43:59.224788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.216 [2024-11-20 07:43:59.224817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.216 qpair failed and we were unable to recover it. 
00:29:41.217 [2024-11-20 07:43:59.225184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.217 [2024-11-20 07:43:59.225211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.217 qpair failed and we were unable to recover it. 00:29:41.217 [2024-11-20 07:43:59.225592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.217 [2024-11-20 07:43:59.225622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.217 qpair failed and we were unable to recover it. 00:29:41.217 [2024-11-20 07:43:59.226004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.217 [2024-11-20 07:43:59.226034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.217 qpair failed and we were unable to recover it. 00:29:41.217 [2024-11-20 07:43:59.226396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.217 [2024-11-20 07:43:59.226424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.217 qpair failed and we were unable to recover it. 00:29:41.217 [2024-11-20 07:43:59.226790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.217 [2024-11-20 07:43:59.226821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.217 qpair failed and we were unable to recover it. 00:29:41.217 [2024-11-20 07:43:59.227212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.217 [2024-11-20 07:43:59.227239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.217 qpair failed and we were unable to recover it. 00:29:41.217 [2024-11-20 07:43:59.227602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.217 [2024-11-20 07:43:59.227632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.217 qpair failed and we were unable to recover it. 00:29:41.217 [2024-11-20 07:43:59.227996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.217 [2024-11-20 07:43:59.228026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.217 qpair failed and we were unable to recover it. 00:29:41.217 [2024-11-20 07:43:59.228339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.217 [2024-11-20 07:43:59.228367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.217 qpair failed and we were unable to recover it. 00:29:41.217 [2024-11-20 07:43:59.228700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.217 [2024-11-20 07:43:59.228727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.217 qpair failed and we were unable to recover it. 
00:29:41.217 [2024-11-20 07:43:59.229090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.217 [2024-11-20 07:43:59.229119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.217 qpair failed and we were unable to recover it.
00:29:41.217 [... the same three-message sequence repeats for every subsequent connection attempt, with only the timestamps advancing (2024-11-20 07:43:59.229090 through 07:43:59.310904): each connect() to 10.0.0.2, port 4420 is refused with errno = 111, and each qpair fails without recovery ...]
00:29:41.223 [2024-11-20 07:43:59.311276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-20 07:43:59.311304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-20 07:43:59.311674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-20 07:43:59.311702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-20 07:43:59.312069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-20 07:43:59.312099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-20 07:43:59.312465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-20 07:43:59.312493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-20 07:43:59.312768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-20 07:43:59.312799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-20 07:43:59.313054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-20 07:43:59.313085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-20 07:43:59.313498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-20 07:43:59.313527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-20 07:43:59.313862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-20 07:43:59.313892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-20 07:43:59.314259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-20 07:43:59.314286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-20 07:43:59.314648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-20 07:43:59.314677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 
00:29:41.223 [2024-11-20 07:43:59.314936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-20 07:43:59.314965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-20 07:43:59.315336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-20 07:43:59.315364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.223 [2024-11-20 07:43:59.315527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.223 [2024-11-20 07:43:59.315560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.223 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.315976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.316006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.316266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.316293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.316687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.316715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.317101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.317130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.317503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.317537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.317910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.317940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.318198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.318225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 
00:29:41.224 [2024-11-20 07:43:59.318584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.318612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.318978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.319008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.319439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.319467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.319810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.319840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.320220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.320249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.320623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.320652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.321000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.321030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.321409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.321437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.321805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.321834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.322208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.322237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 
00:29:41.224 [2024-11-20 07:43:59.322602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.322630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.322981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.323012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.323323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.323351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.323736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.323776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.324155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.324184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.324544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.324572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.324941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.324971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.325333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.325362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.325736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.325777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.326215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.326243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 
00:29:41.224 [2024-11-20 07:43:59.326602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.326630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.326884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.224 [2024-11-20 07:43:59.326914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.224 qpair failed and we were unable to recover it. 00:29:41.224 [2024-11-20 07:43:59.327274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.327304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.327662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.327691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.328076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.328106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.328457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.328486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.328842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.328872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.329243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.329271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.329632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.329661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.330023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.330053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 
00:29:41.225 [2024-11-20 07:43:59.330427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.330457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.330817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.330847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.331281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.331309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.331641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.331671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.331895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.331927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.332269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.332297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.332540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.332572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.332932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.332967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.333340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.333369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.333723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.333763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 
00:29:41.225 [2024-11-20 07:43:59.334120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.334149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.334514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.334542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.334912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.334942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.335309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.335337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.335697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.335726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.336091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.336120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.336498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.336527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.336903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.336932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.337151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.337180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.337547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.337575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 
00:29:41.225 [2024-11-20 07:43:59.337975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.338005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.338248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.338280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.338623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.338653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.338960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.338992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.339351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.339378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.339770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.339800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.340040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.340071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.340341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.340370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.340728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.340770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.341121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.341151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 
00:29:41.225 [2024-11-20 07:43:59.341507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.341535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.341795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.341824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.342186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.342215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.342573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.342601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.342984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.343020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.343348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.343377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.343775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.343806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.344162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.344190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.344553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.344581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.344954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.344983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 
00:29:41.225 [2024-11-20 07:43:59.345354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.345382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.345741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.345783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.346157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.346185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.346553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.346582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.346961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.346990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.225 qpair failed and we were unable to recover it. 00:29:41.225 [2024-11-20 07:43:59.347348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.225 [2024-11-20 07:43:59.347376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 00:29:41.226 [2024-11-20 07:43:59.347641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.347670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 00:29:41.226 [2024-11-20 07:43:59.347925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.347958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 00:29:41.226 [2024-11-20 07:43:59.348341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.348370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 00:29:41.226 [2024-11-20 07:43:59.348755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.348785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 
00:29:41.226 [2024-11-20 07:43:59.349128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.349156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 00:29:41.226 [2024-11-20 07:43:59.349517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.349546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 00:29:41.226 [2024-11-20 07:43:59.349933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.349963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 00:29:41.226 [2024-11-20 07:43:59.350327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.350356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 00:29:41.226 [2024-11-20 07:43:59.350710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.350738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 00:29:41.226 [2024-11-20 07:43:59.351096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.351124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 00:29:41.226 [2024-11-20 07:43:59.351486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.351514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 00:29:41.226 [2024-11-20 07:43:59.351706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.351733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 00:29:41.226 [2024-11-20 07:43:59.352115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.352144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.226 qpair failed and we were unable to recover it. 00:29:41.226 [2024-11-20 07:43:59.352400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.226 [2024-11-20 07:43:59.352432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 
00:29:41.504 [2024-11-20 07:43:59.352807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.352841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.353215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.353244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.353580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.353607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.353976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.354005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.354356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.354384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.354765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.354796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.355140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.355169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.355537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.355566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.355931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.355960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.356331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.356359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 
00:29:41.504 [2024-11-20 07:43:59.356588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.356620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.357009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.357039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.357404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.357432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.357705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.357733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.358120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.358155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.358514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.504 [2024-11-20 07:43:59.358543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.504 qpair failed and we were unable to recover it. 00:29:41.504 [2024-11-20 07:43:59.358906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.505 [2024-11-20 07:43:59.358935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.505 qpair failed and we were unable to recover it. 00:29:41.505 [2024-11-20 07:43:59.359298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.505 [2024-11-20 07:43:59.359328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.505 qpair failed and we were unable to recover it. 00:29:41.505 [2024-11-20 07:43:59.359688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.505 [2024-11-20 07:43:59.359716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.505 qpair failed and we were unable to recover it. 00:29:41.505 [2024-11-20 07:43:59.360154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.505 [2024-11-20 07:43:59.360183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.505 qpair failed and we were unable to recover it. 
00:29:41.505 [2024-11-20 07:43:59.360540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.505 [2024-11-20 07:43:59.360568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.505 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats once per reconnect attempt from 07:43:59.360540 through 07:43:59.390210; the duplicate triplets are elided ...]
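errno = 111 in the repeated failures above is ECONNREFUSED: nothing is accepting TCP connections at 10.0.0.2:4420 while the target application is down. The following is a minimal C sketch, not SPDK's actual posix.c code, that reproduces just this failing step (address and port taken from the log):

/* Minimal sketch (not SPDK's posix_sock_create): attempt the same TCP
 * connect to the NVMe/TCP listener at 10.0.0.2:4420. With no target
 * listening, connect() fails with errno = 111 (ECONNREFUSED). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* While the target is killed this prints: connect() failed, errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}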
00:29:41.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3589797 Killed "${NVMF_APP[@]}" "$@"
00:29:41.507 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:41.507 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:41.507 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:41.507 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:41.507 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:41.508 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3590762
00:29:41.508 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3590762
00:29:41.508 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3590762 ']'
00:29:41.508 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:41.508 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:41.508 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:41.508 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:41.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:41.508 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:41.508 07:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same connect() failed (errno = 111) / sock connection error (tqpair=0x7fa118000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." triplet was interleaved with the trace lines above, repeating from 07:43:59.390595 through 07:43:59.405640; the duplicates are elided ...]
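The traced waitforlisten 3590762 call blocks until the freshly started nvmf_tgt accepts connections on its RPC socket (rpc_addr=/var/tmp/spdk.sock), giving up after max_retries=100 attempts. The real helper is shell code in autotest_common.sh; the C sketch below only mirrors the idea under those assumptions and is not the actual implementation:

/* Illustrative equivalent of the waitforlisten helper: poll the SPDK RPC
 * Unix socket until the new target process accepts a connection, bounded
 * by max_retries. Path and retry count are taken from the trace above. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int rpc_socket_ready(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;

    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    const char *rpc_addr = "/var/tmp/spdk.sock"; /* rpc_addr from the trace */
    int max_retries = 100;                       /* max_retries from the trace */

    for (int i = 0; i < max_retries; i++) {
        if (rpc_socket_ready(rpc_addr)) {
            printf("target is listening on %s\n", rpc_addr);
            return 0;
        }
        usleep(100 * 1000);                      /* brief pause between polls */
    }
    fprintf(stderr, "target never started listening on %s\n", rpc_addr);
    return 1;
}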
00:29:41.509 [2024-11-20 07:43:59.406048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.406080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.406385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.406414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.406762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.406793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.407189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.407218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.407580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.407609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.407950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.407982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.408358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.408389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.408758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.408790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.409086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.409116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.409346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.409378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 
00:29:41.509 [2024-11-20 07:43:59.409726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.409771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.410062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.410092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.410490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.410522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.410872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.410904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.411256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.411287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.411623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.411653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.411895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.411929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.412191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.412223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.412475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.412508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.412870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.412902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 
00:29:41.509 [2024-11-20 07:43:59.413269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.413298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.413579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.413608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.413868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.413898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.414269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.414298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.414659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.414689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.415122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.415154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.415514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.415543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.415819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.415849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.416223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.416251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.509 qpair failed and we were unable to recover it. 00:29:41.509 [2024-11-20 07:43:59.416478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.509 [2024-11-20 07:43:59.416507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 
00:29:41.510 [2024-11-20 07:43:59.416784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.416817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.417108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.417136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.417483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.417513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.417872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.417902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.418286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.418313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.418679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.418707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.419168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.419197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.419568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.419598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.419974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.420011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.420277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.420309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 
00:29:41.510 [2024-11-20 07:43:59.420608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.420637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.421043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.421073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.421326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.421354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.421681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.421711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.422101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.422131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.422505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.422533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.422769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.422799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.423255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.423284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.423638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.423667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 00:29:41.510 [2024-11-20 07:43:59.424016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.510 [2024-11-20 07:43:59.424047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.510 qpair failed and we were unable to recover it. 
00:29:41.513 [2024-11-20 07:43:59.458231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.513 [2024-11-20 07:43:59.458260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.513 qpair failed and we were unable to recover it.
00:29:41.513 [2024-11-20 07:43:59.458644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.513 [2024-11-20 07:43:59.458673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.513 qpair failed and we were unable to recover it.
00:29:41.513 [2024-11-20 07:43:59.458943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.513 [2024-11-20 07:43:59.458973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.513 qpair failed and we were unable to recover it.
00:29:41.513 [2024-11-20 07:43:59.459348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.513 [2024-11-20 07:43:59.459376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.513 qpair failed and we were unable to recover it.
00:29:41.513 [2024-11-20 07:43:59.459605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.513 [2024-11-20 07:43:59.459637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.513 qpair failed and we were unable to recover it.
00:29:41.513 [2024-11-20 07:43:59.460018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.513 [2024-11-20 07:43:59.460049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.513 qpair failed and we were unable to recover it.
00:29:41.513 [2024-11-20 07:43:59.460218] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
00:29:41.513 [2024-11-20 07:43:59.460281] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:41.513 [2024-11-20 07:43:59.460430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.513 [2024-11-20 07:43:59.460461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.513 qpair failed and we were unable to recover it.
00:29:41.513 [2024-11-20 07:43:59.460837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.513 [2024-11-20 07:43:59.460867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.513 qpair failed and we were unable to recover it.
00:29:41.513 [2024-11-20 07:43:59.461252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.513 [2024-11-20 07:43:59.461281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.513 qpair failed and we were unable to recover it.
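[editor's note] The two "Starting SPDK ... / DPDK 24.03.0 initialization" lines interleaved above are another SPDK nvmf process booting while the initiator is still retrying: -c 0xF0 is a hexadecimal core mask selecting cores 4-7, --file-prefix=spdk0 keeps its hugepage/runtime files separate from other DPDK processes, --base-virtaddr pins the memory-map base, and --proc-type=auto detects primary vs. secondary. A rough sketch of handing the same parameters to DPDK directly follows; it is illustrative only (SPDK assembles this argv internally from its own options, and the flag list below is trimmed, omitting --match-allocations and the per-library log levels):

    /* sketch: initialize the DPDK EAL with the core flags from the log */
    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf",                           /* argv[0]: process name as logged */
            "-c", "0xF0",                     /* hex core mask: run on cores 4-7 */
            "--no-telemetry",
            "--base-virtaddr=0x200000000000", /* stable VA base for multi-process */
            "--file-prefix=spdk0",            /* private hugepage/runtime files */
            "--proc-type=auto",               /* primary or secondary, detected */
        };
        int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

        /* rte_eal_init() returns the number of parsed args, or -1 on error */
        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "rte_eal_init() failed\n");
            return 1;
        }
        /* ... an nvmf target would set up transports and subsystems here ... */
        return 0;
    }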
00:29:41.513 [2024-11-20 07:43:59.461534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.513 [2024-11-20 07:43:59.461565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.513 qpair failed and we were unable to recover it. 00:29:41.513 [2024-11-20 07:43:59.461955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.461985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.462256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.462285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.462660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.462688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.462936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.462969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.463350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.463380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.463769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.463800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.464172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.464202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.464576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.464606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.464922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.464954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 
00:29:41.514 [2024-11-20 07:43:59.465352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.465381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.465757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.465789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.465916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.465947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.466306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.466335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.466726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.466768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.467007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.467040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.467506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.467536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.467902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.467935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.468383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.468412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.468795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.468826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 
00:29:41.514 [2024-11-20 07:43:59.469177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.469204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.469584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.469619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.469872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.469902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.470121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.470149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.470507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.470534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.470913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.470943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.471324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.471353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.471709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.471737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.472100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.472129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.472470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.472499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 
00:29:41.514 [2024-11-20 07:43:59.472719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.472761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.473126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.473154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.473545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.473575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.473957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.473989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.514 qpair failed and we were unable to recover it. 00:29:41.514 [2024-11-20 07:43:59.474243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.514 [2024-11-20 07:43:59.474272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.474644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.474674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.475060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.475090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.475323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.475351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.475726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.475765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.476097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.476126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 
00:29:41.515 [2024-11-20 07:43:59.476481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.476508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.476872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.476903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.477292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.477320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.477568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.477596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.477989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.478019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.478424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.478453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.478606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.478634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.479011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.479042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.479327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.479356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.479656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.479685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 
00:29:41.515 [2024-11-20 07:43:59.479887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.479918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.480272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.480301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.480675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.480704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.481104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.481134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.481530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.481559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.481984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.482014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.482390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.482419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.482812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.482841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.483222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.483250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.483625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.483654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 
00:29:41.515 [2024-11-20 07:43:59.484025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.484054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.484443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.484479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.515 [2024-11-20 07:43:59.484737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.515 [2024-11-20 07:43:59.484786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.515 qpair failed and we were unable to recover it. 00:29:41.516 [2024-11-20 07:43:59.485190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.516 [2024-11-20 07:43:59.485219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.516 qpair failed and we were unable to recover it. 00:29:41.516 [2024-11-20 07:43:59.485598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.516 [2024-11-20 07:43:59.485627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.516 qpair failed and we were unable to recover it. 00:29:41.516 [2024-11-20 07:43:59.485907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.516 [2024-11-20 07:43:59.485937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.516 qpair failed and we were unable to recover it. 00:29:41.516 [2024-11-20 07:43:59.486175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.516 [2024-11-20 07:43:59.486203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.516 qpair failed and we were unable to recover it. 00:29:41.516 [2024-11-20 07:43:59.486614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.516 [2024-11-20 07:43:59.486643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.516 qpair failed and we were unable to recover it. 00:29:41.516 [2024-11-20 07:43:59.487042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.516 [2024-11-20 07:43:59.487071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.516 qpair failed and we were unable to recover it. 00:29:41.516 [2024-11-20 07:43:59.487449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.516 [2024-11-20 07:43:59.487478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.516 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry from 07:43:59.487850 through 07:43:59.560407; only the timestamps change ...]
00:29:41.522 [2024-11-20 07:43:59.560658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.522 [2024-11-20 07:43:59.560687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.522 qpair failed and we were unable to recover it. 00:29:41.522 [2024-11-20 07:43:59.561060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.522 [2024-11-20 07:43:59.561091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.522 qpair failed and we were unable to recover it. 00:29:41.522 [2024-11-20 07:43:59.561456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.522 [2024-11-20 07:43:59.561485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.522 qpair failed and we were unable to recover it. 00:29:41.522 [2024-11-20 07:43:59.561858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.522 [2024-11-20 07:43:59.561889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.522 qpair failed and we were unable to recover it. 00:29:41.522 [2024-11-20 07:43:59.562272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.522 [2024-11-20 07:43:59.562302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.522 qpair failed and we were unable to recover it. 00:29:41.522 [2024-11-20 07:43:59.562690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.522 [2024-11-20 07:43:59.562719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.522 qpair failed and we were unable to recover it. 00:29:41.522 [2024-11-20 07:43:59.563096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.522 [2024-11-20 07:43:59.563126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.522 qpair failed and we were unable to recover it. 00:29:41.522 [2024-11-20 07:43:59.563420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.522 [2024-11-20 07:43:59.563449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.522 qpair failed and we were unable to recover it. 00:29:41.522 [2024-11-20 07:43:59.563823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.522 [2024-11-20 07:43:59.563854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.522 qpair failed and we were unable to recover it. 00:29:41.522 [2024-11-20 07:43:59.564105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.522 [2024-11-20 07:43:59.564137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.522 qpair failed and we were unable to recover it. 
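For context on what these repeated records mean: errno 111 on Linux is ECONNREFUSED, i.e. nothing at 10.0.0.2:4420 is accepting TCP connections yet, so every reconnect attempt dies inside connect() and the NVMe/TCP qpair can never be established. A minimal sketch that produces exactly this errno (plain POSIX sockets, not SPDK's actual posix_sock_create; address and port are taken from the log):

```c
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to connect to the NVMe/TCP listener from the log (10.0.0.2:4420).
 * While nothing is listening there, connect() fails with errno 111
 * (ECONNREFUSED), which is the condition the *ERROR* records above report. */
int main(void) {
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 0; attempt < 3; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        else
            printf("connected on attempt %d\n", attempt);
        close(fd);
        sleep(1); /* back off before retrying, as the initiator does */
    }
    return 0;
}
```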
00:29:41.522 [2024-11-20 07:43:59.564602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:41.522 [... connect() failed, errno = 111 / sock connection error / qpair failed sequences continue from 07:43:59.564 through 07:43:59.567 ...]
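The "Total cores available: 4" notice is emitted by spdk_app_start() as the application under test brings up the SPDK event framework. As a rough sketch only (not the test's actual code; the application name and callback are placeholders, and spdk_app_opts_init()'s signature has varied across SPDK releases), the startup path looks like:

```c
#include "spdk/event.h"
#include "spdk/log.h"

/* Called on the main reactor once the framework is up. */
static void app_started(void *arg) {
    SPDK_NOTICELOG("application started\n");
}

int main(int argc, char **argv) {
    struct spdk_app_opts opts = {};

    /* Recent SPDK releases take the struct size for ABI compatibility. */
    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "nvmf_example"; /* placeholder name */

    /* spdk_app_start() probes the core mask, prints the
     * "Total cores available" notice, launches one reactor per core,
     * and then invokes the callback on the main reactor. */
    int rc = spdk_app_start(&opts, app_started, NULL);

    spdk_app_fini();
    return rc;
}
```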
00:29:41.523 [... the connect() failed, errno = 111 / sock connection error / qpair failed sequence continues to repeat for every reconnect attempt from 07:43:59.567 through 07:43:59.614 ...]
00:29:41.527 [... connect() failed, errno = 111 / qpair failed sequences continue from 07:43:59.615 through 07:43:59.617 ...]
00:29:41.527 [2024-11-20 07:43:59.618092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:41.527 [2024-11-20 07:43:59.618140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:41.527 [2024-11-20 07:43:59.618148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.527 [2024-11-20 07:43:59.618156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.527 [2024-11-20 07:43:59.618163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.527 [2024-11-20 07:43:59.618197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.527 [2024-11-20 07:43:59.618226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.527 qpair failed and we were unable to recover it. 00:29:41.527 [2024-11-20 07:43:59.618606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.527 [2024-11-20 07:43:59.618641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.527 qpair failed and we were unable to recover it. 00:29:41.527 [2024-11-20 07:43:59.618882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.527 [2024-11-20 07:43:59.618914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.527 qpair failed and we were unable to recover it. 00:29:41.527 [2024-11-20 07:43:59.619300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.527 [2024-11-20 07:43:59.619329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.527 qpair failed and we were unable to recover it. 00:29:41.527 [2024-11-20 07:43:59.619695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.527 [2024-11-20 07:43:59.619723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.527 qpair failed and we were unable to recover it. 00:29:41.527 [2024-11-20 07:43:59.620141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.527 [2024-11-20 07:43:59.620171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.527 qpair failed and we were unable to recover it. 00:29:41.527 [2024-11-20 07:43:59.620189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:41.527 [2024-11-20 07:43:59.620445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.527 [2024-11-20 07:43:59.620474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.527 qpair failed and we were unable to recover it. 
00:29:41.527 [2024-11-20 07:43:59.620560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:41.527 [2024-11-20 07:43:59.620713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:41.527 [2024-11-20 07:43:59.620715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:41.527 [2024-11-20 07:43:59.620885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.527 [2024-11-20 07:43:59.620916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.527 qpair failed and we were unable to recover it.
00:29:41.527 [2024-11-20 07:43:59.621312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.527 [2024-11-20 07:43:59.621340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.527 qpair failed and we were unable to recover it.
00:29:41.527 [2024-11-20 07:43:59.621602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.527 [2024-11-20 07:43:59.621630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.527 qpair failed and we were unable to recover it.
00:29:41.527 [2024-11-20 07:43:59.621993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.527 [2024-11-20 07:43:59.622023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.527 qpair failed and we were unable to recover it.
00:29:41.527 [2024-11-20 07:43:59.622275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.527 [2024-11-20 07:43:59.622303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.527 qpair failed and we were unable to recover it.
00:29:41.527 [2024-11-20 07:43:59.622529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.527 [2024-11-20 07:43:59.622561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.527 qpair failed and we were unable to recover it.
00:29:41.527 [2024-11-20 07:43:59.622922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.528 [2024-11-20 07:43:59.622952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.528 qpair failed and we were unable to recover it.
00:29:41.528 [2024-11-20 07:43:59.623378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.528 [2024-11-20 07:43:59.623407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.528 qpair failed and we were unable to recover it.
00:29:41.528 [2024-11-20 07:43:59.623674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.528 [2024-11-20 07:43:59.623702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:41.528 qpair failed and we were unable to recover it.
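The 'Reactor started on core N' notices are the SPDK event framework bringing up one reactor (polling thread) per core in the application's core mask; here cores 4-7 come up while the NVMe host keeps retrying its connections in the background. A minimal sketch of how an application requests such a mask, assuming a hypothetical app name and a no-op start callback; only the core numbers are taken from the log:

#include "spdk/event.h"

/* Called on the first reactor once the framework is up. */
static void
start_fn(void *ctx)
{
	(void)ctx;
	/* A real application kicks off its work here. */
	spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts;
	int rc;

	(void)argc;
	(void)argv;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "reactor_demo";  /* hypothetical app name */
	opts.reactor_mask = "0xF0";  /* bits 4-7 => reactors on cores 4,5,6,7 */

	rc = spdk_app_start(&opts, start_fn, NULL);
	spdk_app_fini();
	return rc;
}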
00:29:41.528 [2024-11-20 07:43:59.623830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.623860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.623989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.624021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.624372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.624400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.624779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.624809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.625194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.625222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.625570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.625599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.625885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.625914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.626180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.626208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.626485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.626512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.626889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.626919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 
00:29:41.528 [2024-11-20 07:43:59.627273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.627302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.627671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.627700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.627944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.627973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.628105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.628132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.628407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.628435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.628822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.628853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.629084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.629114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.629392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.629420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.629651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.629679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.630052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.630081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 
00:29:41.528 [2024-11-20 07:43:59.630338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.630366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.630754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.630785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.631096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.631123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.631387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.631415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.631779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.631815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.632172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.632201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.632580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.632608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.632875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.632905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.528 [2024-11-20 07:43:59.633276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.528 [2024-11-20 07:43:59.633305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.528 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.633680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.633708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 
00:29:41.529 [2024-11-20 07:43:59.634107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.634138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.634512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.634540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.634806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.634836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.635194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.635222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.635335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.635361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.635733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.635771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.635982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.636012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.636383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.636412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.636684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.636713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.636959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.636989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 
00:29:41.529 [2024-11-20 07:43:59.637355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.637383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.637762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.637792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.638138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.638166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.638459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.638487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.638862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.638892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.639141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.639169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.639561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.639589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.639988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.640017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.640386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.640414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.640792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.640821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 
00:29:41.529 [2024-11-20 07:43:59.641206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.641234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.641599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.641628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.641902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.641933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.642290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.642318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.642684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.642714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.643081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.643112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.643483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.643511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.643771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.643804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.644195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.644223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.644603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.644631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 
00:29:41.529 [2024-11-20 07:43:59.645003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.529 [2024-11-20 07:43:59.645032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.529 qpair failed and we were unable to recover it. 00:29:41.529 [2024-11-20 07:43:59.645292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.645324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.645671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.645701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.645954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.645983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.646395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.646429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.646771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.646801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.647061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.647090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.647521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.647549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.647888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.647918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.648328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.648355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 
00:29:41.530 [2024-11-20 07:43:59.648706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.648734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.649133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.649163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.649428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.649456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.649571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.649600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.649891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.649921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.650147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.650176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.650545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.650574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.650821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.650850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.651240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.651268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.651516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.651544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 
00:29:41.530 [2024-11-20 07:43:59.651775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.651815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.652053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.652082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.652336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.652364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.652577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.652605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.653013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.653043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.653428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.653456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.653705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.653734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.654094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.654123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.654498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.654526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.654903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.654933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 
00:29:41.530 [2024-11-20 07:43:59.655165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.655193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.655608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.655638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.655997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.656028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.656376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.656406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.656769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.656798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.657170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.530 [2024-11-20 07:43:59.657198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.530 qpair failed and we were unable to recover it. 00:29:41.530 [2024-11-20 07:43:59.657587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.657615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.657847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.657878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.658250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.658278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.658499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.658528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 
00:29:41.531 [2024-11-20 07:43:59.658763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.658797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.659258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.659287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.659531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.659561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.659806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.659837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.660156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.660194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.660493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.660521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.660762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.660793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.660979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.661010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.661264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.661296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.661741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.661798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 
00:29:41.531 [2024-11-20 07:43:59.662165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.662194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.662456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.662484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.662843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.662873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.663247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.663276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.663645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.663673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.664085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.664115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.664477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.664506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.664868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.664898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.665328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.665357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 00:29:41.531 [2024-11-20 07:43:59.665457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.531 [2024-11-20 07:43:59.665487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:41.531 qpair failed and we were unable to recover it. 
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Write completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Write completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Write completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Write completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Read completed with error (sct=0, sc=8)
00:29:41.531 starting I/O failed
00:29:41.531 Write completed with error (sct=0, sc=8)
00:29:41.532 starting I/O failed
00:29:41.532 Read completed with error (sct=0, sc=8)
00:29:41.532 starting I/O failed
00:29:41.532 Write completed with error (sct=0, sc=8)
00:29:41.532 starting I/O failed
00:29:41.532 Write completed with error (sct=0, sc=8)
00:29:41.532 starting I/O failed
00:29:41.532 Read completed with error (sct=0, sc=8)
00:29:41.532 starting I/O failed
00:29:41.532 Read completed with error (sct=0, sc=8)
00:29:41.532 starting I/O failed
00:29:41.532 Read completed with error (sct=0, sc=8)
00:29:41.532 starting I/O failed
00:29:41.532 Read completed with error (sct=0, sc=8)
00:29:41.532 starting I/O failed
00:29:41.532 Read completed with error (sct=0, sc=8)
00:29:41.532 starting I/O failed
00:29:41.532 Read completed with error (sct=0, sc=8)
00:29:41.532 starting I/O failed
00:29:41.532 Read completed with error (sct=0, sc=8)
00:29:41.532 starting I/O failed
00:29:41.532 Read completed with error (sct=0, sc=8)
00:29:41.532 starting I/O failed
00:29:41.532 Write completed with error (sct=0, sc=8)
00:29:41.532 starting I/O failed
00:29:41.532 [2024-11-20 07:43:59.666320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:41.532 [2024-11-20 07:43:59.666808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.532 [2024-11-20 07:43:59.666855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420
00:29:41.532 qpair failed and we were unable to recover it.
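The burst of 'completed with error (sct=0, sc=8)' lines is the host draining every outstanding I/O on the broken qpair: status code type 0 is the generic command status set, and status code 8 there is 'Command Aborted due to SQ Deletion'. The CQ transport error -6 that follows is -ENXIO ('No such device or address') returned from spdk_nvme_qpair_process_completions() once the TCP connection behind the qpair is gone. A sketch of that host-side pattern, with invented callback and variable names around the real API:

#include <stdio.h>

#include "spdk/nvme.h"

/* Completion callback as it would be registered with an I/O submission
 * such as spdk_nvme_ns_cmd_read(); the names here are invented. */
static void
io_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* For the aborts above: sct=0 (generic), sc=8
		 * (Command Aborted due to SQ Deletion). */
		printf("I/O failed: sct=%d, sc=%d\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* Poll-loop body: a negative return such as -6 (-ENXIO) means the
 * completion queue's transport is gone, as logged above. */
static void
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	/* max_completions = 0: process everything available. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		printf("CQ transport error %d on qpair\n", rc);
	}
}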
00:29:41.532 [2024-11-20 07:43:59.666965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.532 [2024-11-20 07:43:59.666996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420
00:29:41.532 qpair failed and we were unable to recover it.
00:29:41.532 [... the same three-message record (connect() failed, errno = 111 / sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for every retry from 07:43:59.667329 through 07:43:59.685214, about fifty attempts in all ...]
00:29:41.533 [... all 32 outstanding I/Os on the qpair (mostly reads, a few writes) then complete with error (sct=0, sc=8), each logged as "Read/Write completed with error (sct=0, sc=8)" followed by "starting I/O failed" ...]
00:29:41.534 [2024-11-20 07:43:59.686030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
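Every one of these connect() attempts is failing with errno = 111, which is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 while the NVMe/TCP target is down, so each retry is refused and the qpair cannot be re-established. A minimal standalone sketch (assumptions: plain POSIX sockets, loopback instead of 10.0.0.2 so the refusal is reliable on any machine; illustrative only, not SPDK code) reproduces the exact errno:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);   /* stand-in for 10.0.0.2 */

    /* With no listener bound to the port, connect() fails at once with
     * errno = 111 (ECONNREFUSED) on Linux -- the same errno that
     * posix_sock_create() reports in the records above. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

The host keeps retrying, which is why the same three-message record repeats below for the next qpair context as well.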
00:29:41.534 [2024-11-20 07:43:59.686486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.534 [2024-11-20 07:43:59.686546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420
00:29:41.534 qpair failed and we were unable to recover it.
00:29:41.534 [... the same record for tqpair=0x7fa11c000b90 repeats for every retry from 07:43:59.686824 through 07:43:59.709377, more than sixty attempts, all connect() failed, errno = 111 (the elapsed stamp advances from 00:29:41.534 to 00:29:41.813) ...]
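Each time a qpair is torn down, every command still queued on it is force-completed; that is what the 32-entry bursts of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" are, one after the first reconnect run above and another immediately below. Reading the pair against the NVMe base specification's generic status table, sct=0 selects the generic status code type and sc=0x08 appears to be "Command Aborted due to SQ Deletion", i.e. the I/Os were aborted because their queue went away, not rejected by the device. A small illustrative decoder (hypothetical, not SPDK's own):

#include <stdio.h>

/* Subset of the NVMe generic command status codes (sct = 0),
 * as read from the base spec; only the values relevant here. */
static const char *nvme_generic_sc_str(unsigned int sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "(other generic status)";
    }
}

int main(void)
{
    unsigned int sct = 0, sc = 0x8;   /* the pair logged for every failed I/O */

    if (sct == 0)
        printf("sct=%u (generic), sc=0x%02x: %s\n",
               sct, sc, nvme_generic_sc_str(sc));
    return 0;
}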
00:29:41.813 [... again all 32 outstanding I/Os (reads and writes) complete with error (sct=0, sc=8), each logged as "Read/Write completed with error (sct=0, sc=8)" followed by "starting I/O failed" ...]
00:29:41.813 [2024-11-20 07:43:59.710183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:41.813 [2024-11-20 07:43:59.710553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.813 [2024-11-20 07:43:59.710619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420
00:29:41.813 qpair failed and we were unable to recover it.
00:29:41.813 [... the same record for tqpair=0x7fa124000b90 repeats for every retry from 07:43:59.711074 through 07:43:59.736954, about seventy attempts, all connect() failed, errno = 111 ...]
00:29:41.815 [2024-11-20 07:43:59.737311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.737339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.737566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.737601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.738011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.738041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.738415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.738443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.738894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.738924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.739150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.739178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.739573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.739601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.739964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.739993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.740366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.740395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.740763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.740792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 
00:29:41.815 [2024-11-20 07:43:59.740990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.741019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.741266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.741298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.741458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.741487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.741880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.741910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.742283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.742311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.742534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.742563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.742675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.742705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.743043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.743074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.743436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.743464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.743838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.743868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 
00:29:41.815 [2024-11-20 07:43:59.744093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.744121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.815 [2024-11-20 07:43:59.744369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.815 [2024-11-20 07:43:59.744398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.815 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.744790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.744820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.745174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.745201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.745550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.745577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.745957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.745986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.746362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.746390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.746613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.746641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.746871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.746900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.747152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.747179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 
00:29:41.816 [2024-11-20 07:43:59.747444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.747472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.747848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.747878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.748255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.748283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.748520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.748548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.748905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.748934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.749318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.749345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.749722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.749795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.750155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.750183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.750414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.750442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.750879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.750908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 
00:29:41.816 [2024-11-20 07:43:59.751261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.751288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.751673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.751707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.752092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.752123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.752381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.752409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.752778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.752808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.753038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.753066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.753308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.753339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.753717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.816 [2024-11-20 07:43:59.753762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.816 qpair failed and we were unable to recover it. 00:29:41.816 [2024-11-20 07:43:59.754115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.754144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.754367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.754395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 
00:29:41.817 [2024-11-20 07:43:59.754596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.754623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.755056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.755086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.755459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.755488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.755838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.755867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.756245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.756273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.756641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.756670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.757057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.757086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.757460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.757489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.757856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.757885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.758257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.758285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 
00:29:41.817 [2024-11-20 07:43:59.758516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.758548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.758790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.758818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.759191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.759219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.759593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.759621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.760002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.760031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.760268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.760299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.760433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.760461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.760728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.760766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.761039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.761070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.761286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.761314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 
00:29:41.817 [2024-11-20 07:43:59.761598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.761626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.761855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.761885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.762245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.762273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.762508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.762535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.762742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.762780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.763168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.763196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.763583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.763610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.764011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.764040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.764408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.764437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.764701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.764733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 
00:29:41.817 [2024-11-20 07:43:59.764890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.764921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.765143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.765179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.765325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.765353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.765613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.765642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.765979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.766010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.817 qpair failed and we were unable to recover it. 00:29:41.817 [2024-11-20 07:43:59.766359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.817 [2024-11-20 07:43:59.766387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.766779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.766810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.767188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.767217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.767571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.767600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.768001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.768032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 
00:29:41.818 [2024-11-20 07:43:59.768396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.768425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.768844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.768873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.769227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.769254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.769481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.769509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.769769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.769802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.770194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.770222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.770475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.770504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.770871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.770901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.771270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.771299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.771686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.771714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 
00:29:41.818 [2024-11-20 07:43:59.772142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.772175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.772524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.772553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.772925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.772954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.773272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.773300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.773650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.773679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.773783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.773811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.774178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.774208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.774571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.774599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.774827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.774860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.775151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.775181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 
00:29:41.818 [2024-11-20 07:43:59.775426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.775458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.775831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.775862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.776270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.776297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.776668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.776696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.776996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.777025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.777268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.777296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.777544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.777573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.777967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.777997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.818 qpair failed and we were unable to recover it. 00:29:41.818 [2024-11-20 07:43:59.778221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.818 [2024-11-20 07:43:59.778253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.819 qpair failed and we were unable to recover it. 00:29:41.819 [2024-11-20 07:43:59.778614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.819 [2024-11-20 07:43:59.778642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.819 qpair failed and we were unable to recover it. 
00:29:41.819 [2024-11-20 07:43:59.779004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.819 [2024-11-20 07:43:59.779034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.819 qpair failed and we were unable to recover it. 00:29:41.819 [2024-11-20 07:43:59.779386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.819 [2024-11-20 07:43:59.779422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.819 qpair failed and we were unable to recover it. 00:29:41.819 [2024-11-20 07:43:59.779644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.819 [2024-11-20 07:43:59.779675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.819 qpair failed and we were unable to recover it. 00:29:41.819 [2024-11-20 07:43:59.780042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.819 [2024-11-20 07:43:59.780071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.819 qpair failed and we were unable to recover it. 00:29:41.819 [2024-11-20 07:43:59.780445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.819 [2024-11-20 07:43:59.780473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.819 qpair failed and we were unable to recover it. 00:29:41.819 [2024-11-20 07:43:59.780841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.819 [2024-11-20 07:43:59.780870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.819 qpair failed and we were unable to recover it. 00:29:41.819 [2024-11-20 07:43:59.781257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.819 [2024-11-20 07:43:59.781284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.819 qpair failed and we were unable to recover it. 00:29:41.819 [2024-11-20 07:43:59.781534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.819 [2024-11-20 07:43:59.781565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.819 qpair failed and we were unable to recover it. 00:29:41.819 [2024-11-20 07:43:59.781924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.819 [2024-11-20 07:43:59.781953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.819 qpair failed and we were unable to recover it. 00:29:41.819 [2024-11-20 07:43:59.782331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.819 [2024-11-20 07:43:59.782358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:41.819 qpair failed and we were unable to recover it. 
00:29:41.819 [2024-11-20 07:43:59.782729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.819 [2024-11-20 07:43:59.782768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420
00:29:41.819 qpair failed and we were unable to recover it.
00:29:41.819 [... the same three-line failure repeats 18 more times for tqpair=0x7fa124000b90, timestamps 07:43:59.783149 through 07:43:59.788651 ...]
00:29:41.819 [2024-11-20 07:43:59.788844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.819 [2024-11-20 07:43:59.788936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420
00:29:41.819 qpair failed and we were unable to recover it.
00:29:41.819 [... the same three-line failure repeats 123 more times for tqpair=0x1fd0010, timestamps 07:43:59.789077 through 07:43:59.832350 ...]
00:29:41.823 [2024-11-20 07:43:59.832839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.823 [2024-11-20 07:43:59.832941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420
00:29:41.823 qpair failed and we were unable to recover it.
00:29:41.824 [... the same three-line failure repeats 66 more times for tqpair=0x7fa11c000b90, timestamps 07:43:59.833249 through 07:43:59.856247 ...]
00:29:41.825 [2024-11-20 07:43:59.856484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.856511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.856752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.856783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.857173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.857202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.857432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.857460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.857814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.857845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.858204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.858233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.858484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.858512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.858889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.858918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.859291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.859319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.859567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.859595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 
00:29:41.825 [2024-11-20 07:43:59.860036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.860071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.860280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.860309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.860687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.860715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.861126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.861156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.861527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.861555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.861654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.861682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.861939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fddf30 is same with the state(6) to be set 00:29:41.825 [2024-11-20 07:43:59.862306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.862356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.862600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.862630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.862973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.863005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 
00:29:41.825 [2024-11-20 07:43:59.863115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.863142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.863416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.863444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.863832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.863861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.864092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.864120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.864503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.864533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.864923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.864952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.865414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.865442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.865790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.865820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.825 qpair failed and we were unable to recover it. 00:29:41.825 [2024-11-20 07:43:59.866170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.825 [2024-11-20 07:43:59.866198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.866575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.866603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 
00:29:41.826 [2024-11-20 07:43:59.867017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.867048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.867308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.867342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.867707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.867737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.868030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.868061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.868309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.868337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.868567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.868595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.868933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.868962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.869323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.869350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.869613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.869641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.869891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.869922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 
00:29:41.826 [2024-11-20 07:43:59.870346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.870374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.870707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.870734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.871117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.871145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.871493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.871521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.871891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.871921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.872264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.872293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.872670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.872698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.873079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.873108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.873227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.873259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.873598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.873626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 
00:29:41.826 [2024-11-20 07:43:59.873976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.874007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.874376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.874411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.874651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.874680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.875040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.875068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.875445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.875472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.875849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.875880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.876238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.876266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.876637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.876664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.877043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.877072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.877292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.877320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 
00:29:41.826 [2024-11-20 07:43:59.877671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.826 [2024-11-20 07:43:59.877701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.826 qpair failed and we were unable to recover it. 00:29:41.826 [2024-11-20 07:43:59.878073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.878103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.878478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.878507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.878876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.878904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.879133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.879161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.879542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.879571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.879946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.879975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.880350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.880378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.880772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.880801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.881029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.881056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 
00:29:41.827 [2024-11-20 07:43:59.881422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.881451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.881868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.881899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.882131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.882159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.882354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.882382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.882822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.882851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.883069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.883096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.883463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.883490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.883829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.883858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.884250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.884284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.884670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.884697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 
00:29:41.827 [2024-11-20 07:43:59.885094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.885123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.885342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.885370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.885721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.885758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.886011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.886039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.886379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.886407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.886788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.886816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.887193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.887220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.887424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.887451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.887830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.887859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.888083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.888112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 
00:29:41.827 [2024-11-20 07:43:59.888477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.888504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.888815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.888844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.889077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.889107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.889402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.889429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.889638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.889665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.890037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.890067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.890445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.890473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.890704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.890732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.891105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.827 [2024-11-20 07:43:59.891133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.827 qpair failed and we were unable to recover it. 00:29:41.827 [2024-11-20 07:43:59.891404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.891431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 
00:29:41.828 [2024-11-20 07:43:59.891689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.891720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.891984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.892014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.892391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.892418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.892664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.892695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.892936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.892965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.893288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.893315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.893691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.893718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.893980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.894010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.894369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.894397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.894768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.894799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 
00:29:41.828 [2024-11-20 07:43:59.895150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.895178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.895548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.895575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.895787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.895817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.896042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.896069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.896428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.896455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.896704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.896731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.897012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.897041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.897428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.897455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.897705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.897732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.898116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.898145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 
00:29:41.828 [2024-11-20 07:43:59.898519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.898547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.898765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.898795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.899041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.899069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.899434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.899461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.899713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.899741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.900109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.900139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.900515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.900543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.900891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.900922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.901322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.901351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 00:29:41.828 [2024-11-20 07:43:59.901739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.828 [2024-11-20 07:43:59.901776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.828 qpair failed and we were unable to recover it. 
00:29:41.828 [2024-11-20 07:43:59.901952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:41.828 [2024-11-20 07:43:59.901980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 
00:29:41.828 qpair failed and we were unable to recover it. 
[... the same three-line error repeats continuously with advancing timestamps through 07:43:59.975805: every connect() attempt to 10.0.0.2, port 4420 on tqpair=0x1fd0010 fails with errno = 111, and each time the qpair cannot be recovered ...]
00:29:41.834 [2024-11-20 07:43:59.975777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:41.834 [2024-11-20 07:43:59.975805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 
00:29:41.834 qpair failed and we were unable to recover it. 
00:29:41.834 [2024-11-20 07:43:59.976035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.976062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.976315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.976346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.976567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.976596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.976938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.976968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.977178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.977206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.977428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.977456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.977828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.977857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.978218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.978246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.978616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.978644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.978869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.978899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 
00:29:41.834 [2024-11-20 07:43:59.979169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.979196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.979561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.979589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.979956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.979986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.980360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.980387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.980772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.980802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.981107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.981134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.981476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.981504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.981882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.981911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.982260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.982288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 00:29:41.834 [2024-11-20 07:43:59.982382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.834 [2024-11-20 07:43:59.982408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:41.834 qpair failed and we were unable to recover it. 
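Editor's note on the failure pattern above: errno = 111 is ECONNREFUSED on Linux, so each posix_sock_create() attempt is reaching the target host at 10.0.0.2 but finding nothing accepting connections on port 4420 (the IANA-assigned NVMe/TCP port), and nvme_tcp_qpair_connect_sock() fails before any NVMe-level exchange happens. The following is a minimal sketch of the same failure mode with an ordinary socket, assuming a Linux host where no listener is bound to that address; it is illustrative only and is not SPDK code.

/* Minimal sketch (not SPDK code): reproduce the "connect() failed,
 * errno = 111" records above. Assumes a reachable Linux target with
 * no listener on 10.0.0.2:4420; names here are illustrative. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host up but the port closed, the kernel answers the SYN
         * with an RST and connect() fails with ECONNREFUSED (111 on Linux). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

If the host itself were down or filtered, the same call would typically end in ETIMEDOUT or EHOSTUNREACH instead, so the steady errno = 111 here points at a target that is alive but not listening.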
00:29:41.834 [2024-11-20 07:43:59.982975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.983081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.983377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.983414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.983813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.983867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.984255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.984298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.984648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.984677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.985020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.985052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.985413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.985442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.985813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.985843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.986101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.986129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.986348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.986377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 
00:29:41.835 [2024-11-20 07:43:59.986761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.986791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.987165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.987193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.987578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.987606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.988012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.988042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.988388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.988416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.988790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.988819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.989208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.989237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.989468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.989496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.989727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.989767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.990045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.990075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 
00:29:41.835 [2024-11-20 07:43:59.990258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.990286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.990620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.990649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.991052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.991083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.991463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.991491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.991871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.991901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.992112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.992141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.992363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.992391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.992616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.992644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.993040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.993070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.993457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.993485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 
00:29:41.835 [2024-11-20 07:43:59.993710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.993738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.993850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.993876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.994116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.835 [2024-11-20 07:43:59.994143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.835 qpair failed and we were unable to recover it. 00:29:41.835 [2024-11-20 07:43:59.994418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.994454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.994668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.994698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.994949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.994980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.995430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.995458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.995558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.995586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.995724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.995764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.996108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.996136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 
00:29:41.836 [2024-11-20 07:43:59.996508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.996538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.996909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.996939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.997314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.997342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.997560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.997595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.997989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.998019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.998385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.998413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.998656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.998683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.999068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.999097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.999352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.999385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:43:59.999823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:43:59.999852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 
00:29:41.836 [2024-11-20 07:44:00.000288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.836 [2024-11-20 07:44:00.000318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:41.836 qpair failed and we were unable to recover it. 00:29:41.836 [2024-11-20 07:44:00.000687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.000716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.000888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.000923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.001157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.001190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.001506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.001535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.001813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.001844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.002118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.002148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.002429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.002460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.002848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.002880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.003581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.003616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 
00:29:42.115 [2024-11-20 07:44:00.003896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.003928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.004227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.004257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.004598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.004628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.005000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.005030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.005386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.005415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.005798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.005828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.005958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.005986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.006257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.006285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.006532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.006561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.007026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.007056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 
00:29:42.115 [2024-11-20 07:44:00.007314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.007343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.007712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.007741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.008038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.008067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.008515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.008543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.008661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.008689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.008819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.008852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.009079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.009108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.009491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.115 [2024-11-20 07:44:00.009522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.115 qpair failed and we were unable to recover it. 00:29:42.115 [2024-11-20 07:44:00.009794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.009828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.010253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.010282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 
00:29:42.116 [2024-11-20 07:44:00.010664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.010694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.011066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.011097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.011357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.011390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.011763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.011802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.012067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.012095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.012481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.012510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.012661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.012689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.012865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.012894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.013284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.013313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.013690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.013718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 
00:29:42.116 [2024-11-20 07:44:00.014114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.014143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.014515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.014544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.014904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.014935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.015182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.015212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.015584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.015613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.015838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.015868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.016253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.016281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.016650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.016679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.017068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.017098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.017482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.017510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 
00:29:42.116 [2024-11-20 07:44:00.017875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.017905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.018292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.018323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.018666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.018696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.018923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.018952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.019303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.019332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.019704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.019734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.020106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.020134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.020506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.020536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.020764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.020795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 00:29:42.116 [2024-11-20 07:44:00.021004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.116 [2024-11-20 07:44:00.021032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.116 qpair failed and we were unable to recover it. 
00:29:42.116 [2024-11-20 07:44:00.021386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.116 [2024-11-20 07:44:00.021416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420
00:29:42.116 qpair failed and we were unable to recover it.
00:29:42.116 [triplet above repeated 67 times for tqpair=0x7fa11c000b90, 07:44:00.021386 through 07:44:00.042728, every attempt failing with errno = 111]
00:29:42.118 [2024-11-20 07:44:00.043194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.118 [2024-11-20 07:44:00.043303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420
00:29:42.118 qpair failed and we were unable to recover it.
00:29:42.118 [triplet above repeated 87 times for tqpair=0x7fa118000b90, 07:44:00.043194 through 07:44:00.073598, every attempt failing with errno = 111]
00:29:42.121 [2024-11-20 07:44:00.074179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.121 [2024-11-20 07:44:00.074287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420
00:29:42.121 qpair failed and we were unable to recover it.
00:29:42.122 [triplet above repeated 56 times for tqpair=0x1fd0010, 07:44:00.074179 through 07:44:00.093645, every attempt failing with errno = 111]
00:29:42.122 [2024-11-20 07:44:00.093887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.093921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.094275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.094304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.094584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.094612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.094943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.094972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.095359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.095394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.095742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.095797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.096046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.096075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.096441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.096470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.096742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.096781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.097105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.097133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 
00:29:42.122 [2024-11-20 07:44:00.097502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.097532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.097806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.097836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.098088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.098121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.098504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.098532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.098878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.098907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.099193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.122 [2024-11-20 07:44:00.099221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.122 qpair failed and we were unable to recover it. 00:29:42.122 [2024-11-20 07:44:00.099440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.123 [2024-11-20 07:44:00.099469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.123 qpair failed and we were unable to recover it. 00:29:42.123 [2024-11-20 07:44:00.099833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.123 [2024-11-20 07:44:00.099862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.123 qpair failed and we were unable to recover it. 00:29:42.123 [2024-11-20 07:44:00.100144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.123 [2024-11-20 07:44:00.100172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.123 qpair failed and we were unable to recover it. 00:29:42.123 [2024-11-20 07:44:00.100262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.123 [2024-11-20 07:44:00.100289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0010 with addr=10.0.0.2, port=4420 00:29:42.123 qpair failed and we were unable to recover it. 
00:29:42.123 [2024-11-20 07:44:00.100909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.123 [2024-11-20 07:44:00.101014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.123 qpair failed and we were unable to recover it.
00:29:42.126 [... the same sequence for tqpair=0x7fa11c000b90 repeated ~150 times between 07:44:00.100909 and 07:44:00.151211 ...]
00:29:42.127 [2024-11-20 07:44:00.151336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.151363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.151698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.151726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.152147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.152177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.152409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.152437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.152651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.152678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.153064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.153093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.153460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.153490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.153857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.153887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.154231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.154261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.154648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.154677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 
00:29:42.127 [2024-11-20 07:44:00.154946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.154980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.155362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.155390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.155612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.155641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.155902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.155932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.156317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.156345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.156560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.156588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.156969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.156998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.157378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.157405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.157770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.157799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.158139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.158168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 
00:29:42.127 [2024-11-20 07:44:00.158425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.158457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.158860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.158901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.159274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.159303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.159675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.159703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.160080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.160109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.160361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.160390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.160634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.160665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.160904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.160933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.161179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.161207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-11-20 07:44:00.161558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-11-20 07:44:00.161586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 
00:29:42.128 [2024-11-20 07:44:00.161845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.161876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.162126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.162154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.162531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.162558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.162778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.162807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.163213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.163243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.163485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.163514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.163974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.164003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.164210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.164238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.164461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.164489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.164713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.164755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 
00:29:42.128 [2024-11-20 07:44:00.164973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.165001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.165310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.165339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.165435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.165462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.165718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.165754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.166141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.166170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.166433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.166463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.166823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.166852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.167239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.167267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.167532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.167562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.168002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.168031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 
00:29:42.128 [2024-11-20 07:44:00.168403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.168432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.168656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.168685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.168948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.168979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.169372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.169401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.169639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.169667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.170051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.170081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.170507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.170535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.170928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.170957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.171241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.171269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.171506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.171534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 
00:29:42.128 [2024-11-20 07:44:00.171925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.171956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.172206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.172240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.172615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.172644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.172997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.173027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.173392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.173420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.173646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.173674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.174041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-11-20 07:44:00.174070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-11-20 07:44:00.174452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.174480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.174696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.174724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.175032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.175062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 
00:29:42.129 [2024-11-20 07:44:00.175298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.175325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.175740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.175791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.176162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.176191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.176565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.176594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.177013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.177043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.177425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.177454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.177576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.177606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.178003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.178034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.178413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.178442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.178672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.178700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 
00:29:42.129 [2024-11-20 07:44:00.178980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.179010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.179258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.179288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.179615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.179644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.179854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.179884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.180249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.180277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.180535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.180567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.180807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.180836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.180950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.180980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.181375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.181405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.181562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.181591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 
00:29:42.129 [2024-11-20 07:44:00.182038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.182068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.182278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.182307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.182528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.182555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.182901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.182932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.183336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.183365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.183642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.183670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.184136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.184168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.184422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.184451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.184637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.184665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.129 [2024-11-20 07:44:00.184778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.184810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 
00:29:42.129 [2024-11-20 07:44:00.184963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.129 [2024-11-20 07:44:00.184992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.129 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.185255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.185290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.185651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.185680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.185918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.185948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.186292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.186321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.186703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.186732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.187116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.187145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.187356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.187385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.187805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.187834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.188182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.188209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 
00:29:42.130 [2024-11-20 07:44:00.188580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.188608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.188833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.188864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.189234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.189263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.189482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.189510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.189723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.189761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.190173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.190203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.190577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.190607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.190976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.191006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.191375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.191403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.191633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.191662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 
00:29:42.130 [2024-11-20 07:44:00.191908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.191937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.192398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.192427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.192656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.192688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.193041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.193073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.193516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.193545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.193907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.193938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.194343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.194371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.194756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.194787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.195163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.195193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 00:29:42.130 [2024-11-20 07:44:00.195561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.195588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it. 
00:29:42.130 [2024-11-20 07:44:00.195977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.130 [2024-11-20 07:44:00.196007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa11c000b90 with addr=10.0.0.2, port=4420 00:29:42.130 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=... with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats ~200 more times between 07:44:00.196272 and 07:44:00.268624 (log wall-clock 00:29:42.130-00:29:42.136); only the timestamps change, except that from 07:44:00.212579 onward the failing tqpair is 0x7fa118000b90 instead of 0x7fa11c000b90 ...]
00:29:42.136 [2024-11-20 07:44:00.268965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.268994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.269340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.269368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.269633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.269663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.270046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.270075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.270459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.270487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.270855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.270885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.271166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.271193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.271448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.271475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.271861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.271891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.272228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.272256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 
00:29:42.136 [2024-11-20 07:44:00.272490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.272523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.272629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.272658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.272782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.272811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.273096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.273125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.273503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.273532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.273806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.273838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.274091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.274119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.274375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.274404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.274843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.274872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 00:29:42.136 [2024-11-20 07:44:00.275226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.136 [2024-11-20 07:44:00.275254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.136 qpair failed and we were unable to recover it. 
00:29:42.137 [2024-11-20 07:44:00.275503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.275530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.275901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.275930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.276298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.276327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.276548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.276575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.276848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.276882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.277288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.277316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.277405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.277432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.277545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.277577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.277995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.278032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.278245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.278273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 
00:29:42.137 [2024-11-20 07:44:00.278508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.278535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.278917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.278946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.279179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.279206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.279420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.279448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.279655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.279684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.279980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.280009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.280103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.280130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.280338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.280366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.280773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.280802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.281217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.281246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 
00:29:42.137 [2024-11-20 07:44:00.281508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.281537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.281783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.281812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.282160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.282189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.282562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.282591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.282727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.282775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.283186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.283214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.283581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.283609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.283935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.283966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.284364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.284392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.284736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.284774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 
00:29:42.137 [2024-11-20 07:44:00.285023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.285051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.285322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.285350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.285733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.285768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.286050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.286078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.286196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.286222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.286606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.286634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.286890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.286923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.137 qpair failed and we were unable to recover it. 00:29:42.137 [2024-11-20 07:44:00.287302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.137 [2024-11-20 07:44:00.287332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.287703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.287731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.288079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.288109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 
00:29:42.138 [2024-11-20 07:44:00.288371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.288401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:42.138 [2024-11-20 07:44:00.288777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.288809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:42.138 [2024-11-20 07:44:00.289181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.289210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:42.138 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:42.138 [2024-11-20 07:44:00.289579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.289607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.138 [2024-11-20 07:44:00.289987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.290016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.290234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.290262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.290631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.290671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.290840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.290870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 
00:29:42.138 [2024-11-20 07:44:00.291143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.291171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.291425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.291454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.291796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.291825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.292187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.292216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.292587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.292616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.292976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.293006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.293411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.293439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.293782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.293811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.294158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.294187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.294557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.294587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 
00:29:42.138 [2024-11-20 07:44:00.294816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.294846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.295097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.295124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.295353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.295383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.295528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.295561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.295799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.295829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.296082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.296113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.296494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.296522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.296769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.296799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.297148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.297177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.297556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.297584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 
00:29:42.138 [2024-11-20 07:44:00.297849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.297880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.138 qpair failed and we were unable to recover it. 00:29:42.138 [2024-11-20 07:44:00.298235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.138 [2024-11-20 07:44:00.298263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-20 07:44:00.298603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-20 07:44:00.298632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-20 07:44:00.298982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-20 07:44:00.299013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-20 07:44:00.299393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-20 07:44:00.299423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-20 07:44:00.299660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-20 07:44:00.299689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-20 07:44:00.299990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-20 07:44:00.300019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-20 07:44:00.300361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-20 07:44:00.300390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-20 07:44:00.300770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-20 07:44:00.300801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-20 07:44:00.301169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-20 07:44:00.301200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 
00:29:42.139 [2024-11-20 07:44:00.301444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-20 07:44:00.301477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.139 [2024-11-20 07:44:00.301869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.139 [2024-11-20 07:44:00.301899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.139 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.302276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.302306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.302562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.302591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.302893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.302924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.303104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.303138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.303381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.303412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.303657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.303686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.304069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.304107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.304203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.304230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 
00:29:42.409 [2024-11-20 07:44:00.304639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.304668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.304774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.304805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.304952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.304983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.305332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.305362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.305732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.305772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.306152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.306180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.306450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.306480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.306735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.306788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.307007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.307034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.307427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.307455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 
00:29:42.409 [2024-11-20 07:44:00.307706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.307739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.308116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.308144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.308382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.308413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.308653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.308682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.308927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.308957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.309372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.309400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.309490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.309516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa118000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.309982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.310091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.310432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.310470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 00:29:42.409 [2024-11-20 07:44:00.310586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.409 [2024-11-20 07:44:00.310615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:42.409 qpair failed and we were unable to recover it. 
00:29:42.409 [2024-11-20 07:44:00.310857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.409 [2024-11-20 07:44:00.310890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420
00:29:42.409 qpair failed and we were unable to recover it.
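The triple above repeats for every reconnect attempt the initiator makes while nothing is accepting on 10.0.0.2:4420; only the timestamps differ between attempts. errno 111 is ECONNREFUSED on Linux: the peer host answered, but no listener was bound to the port. As a side note (not part of the test harness), the symbolic name can be decoded from a shell with the errno tool from moreutils, assuming that package is installed:

  $ errno 111
  ECONNREFUSED 111 Connection refused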
00:29:42.411 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:42.411 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:42.411 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.411 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
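Two harness details surface here: the trap registers process_shm and nvmftestfini so the target is cleaned up on SIGINT/SIGTERM/EXIT, and rpc_cmd issues a JSON-RPC request to the running SPDK application. In the SPDK test tree rpc_cmd is a thin wrapper around scripts/rpc.py, so the call above is roughly equivalent to the following sketch (the -s socket path is the rpc.py default, an assumption about how this run was configured):

  # create a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0
  $ scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  Malloc0

rpc.py prints the RPC result, which for bdev_malloc_create is the new bdev's name; that is the bare "Malloc0" echoed further down in this log.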
00:29:42.414 [2024-11-20 07:44:00.370694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.414 [2024-11-20 07:44:00.370723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:42.414 qpair failed and we were unable to recover it. 00:29:42.414 [2024-11-20 07:44:00.371016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.414 [2024-11-20 07:44:00.371044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:42.414 qpair failed and we were unable to recover it. 00:29:42.414 [2024-11-20 07:44:00.371267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.414 [2024-11-20 07:44:00.371295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:42.414 qpair failed and we were unable to recover it. 00:29:42.414 [2024-11-20 07:44:00.371674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.414 [2024-11-20 07:44:00.371702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:42.414 qpair failed and we were unable to recover it. 00:29:42.414 [2024-11-20 07:44:00.371913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.414 [2024-11-20 07:44:00.371942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:42.414 qpair failed and we were unable to recover it. 00:29:42.414 [2024-11-20 07:44:00.372167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.414 [2024-11-20 07:44:00.372194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:42.414 qpair failed and we were unable to recover it. 00:29:42.414 [2024-11-20 07:44:00.372645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.414 [2024-11-20 07:44:00.372673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:42.414 qpair failed and we were unable to recover it. 00:29:42.414 Malloc0 00:29:42.414 [2024-11-20 07:44:00.373082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.414 [2024-11-20 07:44:00.373111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:42.414 qpair failed and we were unable to recover it. 00:29:42.414 [2024-11-20 07:44:00.373560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.414 [2024-11-20 07:44:00.373588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa124000b90 with addr=10.0.0.2, port=4420 00:29:42.414 qpair failed and we were unable to recover it. 
00:29:42.414 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.414 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:42.414 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.414 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:42.415 [2024-11-20 07:44:00.380330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:42.415 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.415 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:42.415 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.416 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:42.416 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.416 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:42.416 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.416 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:42.417 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.417 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:42.417 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.418 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:42.418 [2024-11-20 07:44:00.421188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:42.418 [2024-11-20 07:44:00.422056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.418 [2024-11-20 07:44:00.422197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.418 [2024-11-20 07:44:00.422249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.418 [2024-11-20 07:44:00.422272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.418 [2024-11-20 07:44:00.422292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:42.418 [2024-11-20 07:44:00.422369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.418 qpair failed and we were unable to recover it.
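[editor's note: the rpc_cmd calls traced above are the standard SPDK NVMe-oF target bring-up over TCP. Run by hand outside the autotest harness, the same sequence would look roughly like the sketch below, using scripts/rpc.py from an SPDK checkout (rpc_cmd in the autotest scripts is a thin wrapper around it); the Malloc0 bdev itself is assumed to have been created by an earlier bdev_malloc_create step not shown in this excerpt.

    # create the TCP transport (-o disables the C2H success optimization, as in the trace)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    # create the subsystem, allowing any host (-a), with the serial number from the trace
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # attach the Malloc0 bdev as a namespace
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # start listening; only after this point do the host's connect() retries stop failing
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
]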
00:29:42.418 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.418 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:42.418 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.418 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:42.418 [2024-11-20 07:44:00.431907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.418 [2024-11-20 07:44:00.432063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.418 [2024-11-20 07:44:00.432108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.418 [2024-11-20 07:44:00.432130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.418 [2024-11-20 07:44:00.432151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:42.418 [2024-11-20 07:44:00.432202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.418 qpair failed and we were unable to recover it.
00:29:42.418 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.418 07:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3589952
00:29:42.418 [2024-11-20 07:44:00.441856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.418 [2024-11-20 07:44:00.441955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.418 [2024-11-20 07:44:00.441984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.418 [2024-11-20 07:44:00.441998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.418 [2024-11-20 07:44:00.442011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:42.418 [2024-11-20 07:44:00.442041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.418 qpair failed and we were unable to recover it.
00:29:42.418 [... the same "Unknown controller ID 0x1" / Fabric CONNECT failure block repeats at roughly 10 ms intervals (07:44:00.451 through 07:44:00.592), each attempt ending "qpair failed and we were unable to recover it." ...]
00:29:42.420 [2024-11-20 07:44:00.602239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.420 [2024-11-20 07:44:00.602306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.420 [2024-11-20 07:44:00.602323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.420 [2024-11-20 07:44:00.602331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.420 [2024-11-20 07:44:00.602337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.420 [2024-11-20 07:44:00.602354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.420 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-20 07:44:00.612284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-20 07:44:00.612361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-20 07:44:00.612383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-20 07:44:00.612395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-20 07:44:00.612402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.683 [2024-11-20 07:44:00.612421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-20 07:44:00.622338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-20 07:44:00.622411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-20 07:44:00.622431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-20 07:44:00.622442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-20 07:44:00.622453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.683 [2024-11-20 07:44:00.622473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.683 qpair failed and we were unable to recover it. 
00:29:42.683 [2024-11-20 07:44:00.632442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-20 07:44:00.632522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-20 07:44:00.632558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-20 07:44:00.632568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-20 07:44:00.632574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.683 [2024-11-20 07:44:00.632598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-20 07:44:00.642405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-20 07:44:00.642485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-20 07:44:00.642512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-20 07:44:00.642520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-20 07:44:00.642527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.683 [2024-11-20 07:44:00.642547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-20 07:44:00.652429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-20 07:44:00.652506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-20 07:44:00.652525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-20 07:44:00.652534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-20 07:44:00.652541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.683 [2024-11-20 07:44:00.652558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.683 qpair failed and we were unable to recover it. 
00:29:42.683 [2024-11-20 07:44:00.662500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-20 07:44:00.662571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-20 07:44:00.662589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-20 07:44:00.662598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-20 07:44:00.662605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.683 [2024-11-20 07:44:00.662623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-20 07:44:00.672319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-20 07:44:00.672382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-20 07:44:00.672399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-20 07:44:00.672406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-20 07:44:00.672412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.683 [2024-11-20 07:44:00.672429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-20 07:44:00.682345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-20 07:44:00.682406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-20 07:44:00.682423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-20 07:44:00.682430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-20 07:44:00.682443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.683 [2024-11-20 07:44:00.682460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.683 qpair failed and we were unable to recover it. 
00:29:42.683 [2024-11-20 07:44:00.692516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-20 07:44:00.692583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-20 07:44:00.692600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-20 07:44:00.692607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-20 07:44:00.692614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.683 [2024-11-20 07:44:00.692630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-20 07:44:00.702569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-20 07:44:00.702642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-20 07:44:00.702659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-20 07:44:00.702666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-20 07:44:00.702672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.683 [2024-11-20 07:44:00.702691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-11-20 07:44:00.712565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-20 07:44:00.712630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-20 07:44:00.712646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-20 07:44:00.712654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.683 [2024-11-20 07:44:00.712660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.683 [2024-11-20 07:44:00.712677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.683 qpair failed and we were unable to recover it. 
00:29:42.683 [2024-11-20 07:44:00.722595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.683 [2024-11-20 07:44:00.722655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.683 [2024-11-20 07:44:00.722672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.683 [2024-11-20 07:44:00.722679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.684 [2024-11-20 07:44:00.722685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.684 [2024-11-20 07:44:00.722701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-11-20 07:44:00.732621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.684 [2024-11-20 07:44:00.732698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.684 [2024-11-20 07:44:00.732719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.684 [2024-11-20 07:44:00.732726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.684 [2024-11-20 07:44:00.732733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.684 [2024-11-20 07:44:00.732761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-11-20 07:44:00.742683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.684 [2024-11-20 07:44:00.742798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.684 [2024-11-20 07:44:00.742817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.684 [2024-11-20 07:44:00.742825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.684 [2024-11-20 07:44:00.742831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.684 [2024-11-20 07:44:00.742849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.684 qpair failed and we were unable to recover it. 
00:29:42.684 [2024-11-20 07:44:00.752707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.684 [2024-11-20 07:44:00.752769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.684 [2024-11-20 07:44:00.752787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.684 [2024-11-20 07:44:00.752794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.684 [2024-11-20 07:44:00.752800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.684 [2024-11-20 07:44:00.752817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-11-20 07:44:00.762698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.684 [2024-11-20 07:44:00.762761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.684 [2024-11-20 07:44:00.762778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.684 [2024-11-20 07:44:00.762786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.684 [2024-11-20 07:44:00.762792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.684 [2024-11-20 07:44:00.762809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-11-20 07:44:00.772625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.684 [2024-11-20 07:44:00.772700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.684 [2024-11-20 07:44:00.772728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.684 [2024-11-20 07:44:00.772735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.684 [2024-11-20 07:44:00.772742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.684 [2024-11-20 07:44:00.772767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.684 qpair failed and we were unable to recover it. 
00:29:42.684 [2024-11-20 07:44:00.782864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.684 [2024-11-20 07:44:00.782961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.684 [2024-11-20 07:44:00.782978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.684 [2024-11-20 07:44:00.782985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.684 [2024-11-20 07:44:00.782992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.684 [2024-11-20 07:44:00.783009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-11-20 07:44:00.792809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.684 [2024-11-20 07:44:00.792887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.684 [2024-11-20 07:44:00.792908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.684 [2024-11-20 07:44:00.792916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.684 [2024-11-20 07:44:00.792922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.684 [2024-11-20 07:44:00.792940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-11-20 07:44:00.802834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.684 [2024-11-20 07:44:00.802901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.684 [2024-11-20 07:44:00.802919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.684 [2024-11-20 07:44:00.802926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.684 [2024-11-20 07:44:00.802933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.684 [2024-11-20 07:44:00.802950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.684 qpair failed and we were unable to recover it. 
00:29:42.684 [2024-11-20 07:44:00.812932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.684 [2024-11-20 07:44:00.813005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.684 [2024-11-20 07:44:00.813021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.684 [2024-11-20 07:44:00.813034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.684 [2024-11-20 07:44:00.813040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.684 [2024-11-20 07:44:00.813057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-11-20 07:44:00.822944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.684 [2024-11-20 07:44:00.823020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.684 [2024-11-20 07:44:00.823037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.684 [2024-11-20 07:44:00.823044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.684 [2024-11-20 07:44:00.823050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.684 [2024-11-20 07:44:00.823068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-11-20 07:44:00.832980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.684 [2024-11-20 07:44:00.833055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.684 [2024-11-20 07:44:00.833071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.684 [2024-11-20 07:44:00.833079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.684 [2024-11-20 07:44:00.833085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.684 [2024-11-20 07:44:00.833102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.684 qpair failed and we were unable to recover it. 
00:29:42.684 [2024-11-20 07:44:00.842965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.684 [2024-11-20 07:44:00.843026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.684 [2024-11-20 07:44:00.843042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.684 [2024-11-20 07:44:00.843049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.684 [2024-11-20 07:44:00.843056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.684 [2024-11-20 07:44:00.843072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-11-20 07:44:00.852991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.684 [2024-11-20 07:44:00.853056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.684 [2024-11-20 07:44:00.853072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.684 [2024-11-20 07:44:00.853079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.685 [2024-11-20 07:44:00.853086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.685 [2024-11-20 07:44:00.853103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.685 qpair failed and we were unable to recover it. 00:29:42.685 [2024-11-20 07:44:00.863076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.685 [2024-11-20 07:44:00.863142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.685 [2024-11-20 07:44:00.863159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.685 [2024-11-20 07:44:00.863166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.685 [2024-11-20 07:44:00.863173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.685 [2024-11-20 07:44:00.863190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.685 qpair failed and we were unable to recover it. 
00:29:42.685 [2024-11-20 07:44:00.873128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.685 [2024-11-20 07:44:00.873194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.685 [2024-11-20 07:44:00.873210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.685 [2024-11-20 07:44:00.873217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.685 [2024-11-20 07:44:00.873224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.685 [2024-11-20 07:44:00.873240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.685 qpair failed and we were unable to recover it. 00:29:42.685 [2024-11-20 07:44:00.883083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.685 [2024-11-20 07:44:00.883150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.685 [2024-11-20 07:44:00.883168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.685 [2024-11-20 07:44:00.883175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.685 [2024-11-20 07:44:00.883184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.685 [2024-11-20 07:44:00.883201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.685 qpair failed and we were unable to recover it. 00:29:42.947 [2024-11-20 07:44:00.893131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.947 [2024-11-20 07:44:00.893202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.947 [2024-11-20 07:44:00.893218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.947 [2024-11-20 07:44:00.893226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.947 [2024-11-20 07:44:00.893233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.947 [2024-11-20 07:44:00.893249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.947 qpair failed and we were unable to recover it. 
00:29:42.947 [2024-11-20 07:44:00.903068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.947 [2024-11-20 07:44:00.903143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.947 [2024-11-20 07:44:00.903160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.947 [2024-11-20 07:44:00.903167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.947 [2024-11-20 07:44:00.903174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.947 [2024-11-20 07:44:00.903189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.947 qpair failed and we were unable to recover it. 00:29:42.947 [2024-11-20 07:44:00.913195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.947 [2024-11-20 07:44:00.913266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.947 [2024-11-20 07:44:00.913282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.947 [2024-11-20 07:44:00.913290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.947 [2024-11-20 07:44:00.913296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.947 [2024-11-20 07:44:00.913312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.947 qpair failed and we were unable to recover it. 00:29:42.947 [2024-11-20 07:44:00.923206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.947 [2024-11-20 07:44:00.923269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.947 [2024-11-20 07:44:00.923286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.947 [2024-11-20 07:44:00.923293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.947 [2024-11-20 07:44:00.923299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.947 [2024-11-20 07:44:00.923316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.947 qpair failed and we were unable to recover it. 
00:29:42.947 [2024-11-20 07:44:00.933275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.947 [2024-11-20 07:44:00.933341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.947 [2024-11-20 07:44:00.933357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.947 [2024-11-20 07:44:00.933365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.947 [2024-11-20 07:44:00.933371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.947 [2024-11-20 07:44:00.933387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.947 qpair failed and we were unable to recover it. 00:29:42.947 [2024-11-20 07:44:00.943318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.947 [2024-11-20 07:44:00.943396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.947 [2024-11-20 07:44:00.943412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.947 [2024-11-20 07:44:00.943425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.947 [2024-11-20 07:44:00.943431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.947 [2024-11-20 07:44:00.943447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.947 qpair failed and we were unable to recover it. 00:29:42.947 [2024-11-20 07:44:00.953302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.947 [2024-11-20 07:44:00.953388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.947 [2024-11-20 07:44:00.953406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.947 [2024-11-20 07:44:00.953413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.947 [2024-11-20 07:44:00.953420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.947 [2024-11-20 07:44:00.953436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.947 qpair failed and we were unable to recover it. 
00:29:42.947 [2024-11-20 07:44:00.963295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.947 [2024-11-20 07:44:00.963358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.947 [2024-11-20 07:44:00.963375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.947 [2024-11-20 07:44:00.963382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.947 [2024-11-20 07:44:00.963389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.947 [2024-11-20 07:44:00.963406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.947 qpair failed and we were unable to recover it. 00:29:42.947 [2024-11-20 07:44:00.973380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.947 [2024-11-20 07:44:00.973444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.947 [2024-11-20 07:44:00.973460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.947 [2024-11-20 07:44:00.973467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.947 [2024-11-20 07:44:00.973474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.947 [2024-11-20 07:44:00.973491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.947 qpair failed and we were unable to recover it. 00:29:42.947 [2024-11-20 07:44:00.983430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.947 [2024-11-20 07:44:00.983508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.947 [2024-11-20 07:44:00.983523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.947 [2024-11-20 07:44:00.983531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.947 [2024-11-20 07:44:00.983537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.947 [2024-11-20 07:44:00.983558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.947 qpair failed and we were unable to recover it. 
00:29:42.947 [2024-11-20 07:44:00.993398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.947 [2024-11-20 07:44:00.993459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.947 [2024-11-20 07:44:00.993476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.947 [2024-11-20 07:44:00.993483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.947 [2024-11-20 07:44:00.993490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.947 [2024-11-20 07:44:00.993506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.947 qpair failed and we were unable to recover it. 00:29:42.947 [2024-11-20 07:44:01.003458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.947 [2024-11-20 07:44:01.003519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.947 [2024-11-20 07:44:01.003535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.947 [2024-11-20 07:44:01.003543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.947 [2024-11-20 07:44:01.003549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.947 [2024-11-20 07:44:01.003566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.948 qpair failed and we were unable to recover it. 00:29:42.948 [2024-11-20 07:44:01.013498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.948 [2024-11-20 07:44:01.013563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.948 [2024-11-20 07:44:01.013579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.948 [2024-11-20 07:44:01.013586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.948 [2024-11-20 07:44:01.013592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.948 [2024-11-20 07:44:01.013608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.948 qpair failed and we were unable to recover it. 
00:29:42.948 [2024-11-20 07:44:01.023639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.948 [2024-11-20 07:44:01.023736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.948 [2024-11-20 07:44:01.023759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.948 [2024-11-20 07:44:01.023766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.948 [2024-11-20 07:44:01.023772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.948 [2024-11-20 07:44:01.023790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.948 qpair failed and we were unable to recover it. 00:29:42.948 [2024-11-20 07:44:01.033548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.948 [2024-11-20 07:44:01.033615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.948 [2024-11-20 07:44:01.033631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.948 [2024-11-20 07:44:01.033638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.948 [2024-11-20 07:44:01.033645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.948 [2024-11-20 07:44:01.033661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.948 qpair failed and we were unable to recover it. 00:29:42.948 [2024-11-20 07:44:01.043602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.948 [2024-11-20 07:44:01.043712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.948 [2024-11-20 07:44:01.043728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.948 [2024-11-20 07:44:01.043736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.948 [2024-11-20 07:44:01.043742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.948 [2024-11-20 07:44:01.043766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.948 qpair failed and we were unable to recover it. 
00:29:42.948 [2024-11-20 07:44:01.053622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.948 [2024-11-20 07:44:01.053694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.948 [2024-11-20 07:44:01.053711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.948 [2024-11-20 07:44:01.053718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.948 [2024-11-20 07:44:01.053725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.948 [2024-11-20 07:44:01.053742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.948 qpair failed and we were unable to recover it. 00:29:42.948 [2024-11-20 07:44:01.063676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.948 [2024-11-20 07:44:01.063752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.948 [2024-11-20 07:44:01.063769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.948 [2024-11-20 07:44:01.063777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.948 [2024-11-20 07:44:01.063783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.948 [2024-11-20 07:44:01.063800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.948 qpair failed and we were unable to recover it. 00:29:42.948 [2024-11-20 07:44:01.073708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.948 [2024-11-20 07:44:01.073813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.948 [2024-11-20 07:44:01.073836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.948 [2024-11-20 07:44:01.073843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.948 [2024-11-20 07:44:01.073850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.948 [2024-11-20 07:44:01.073866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.948 qpair failed and we were unable to recover it. 
00:29:42.948 [2024-11-20 07:44:01.083728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.948 [2024-11-20 07:44:01.083806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.948 [2024-11-20 07:44:01.083823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.948 [2024-11-20 07:44:01.083830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.948 [2024-11-20 07:44:01.083836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.948 [2024-11-20 07:44:01.083853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.948 qpair failed and we were unable to recover it. 00:29:42.948 [2024-11-20 07:44:01.093817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.948 [2024-11-20 07:44:01.093891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.948 [2024-11-20 07:44:01.093907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.948 [2024-11-20 07:44:01.093914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.948 [2024-11-20 07:44:01.093920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.948 [2024-11-20 07:44:01.093936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.948 qpair failed and we were unable to recover it. 00:29:42.948 [2024-11-20 07:44:01.103832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.948 [2024-11-20 07:44:01.103908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.948 [2024-11-20 07:44:01.103925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.948 [2024-11-20 07:44:01.103933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.948 [2024-11-20 07:44:01.103939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:42.948 [2024-11-20 07:44:01.103956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:42.948 qpair failed and we were unable to recover it. 
00:29:42.948 [2024-11-20 07:44:01.113841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.948 [2024-11-20 07:44:01.113902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.948 [2024-11-20 07:44:01.113918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.948 [2024-11-20 07:44:01.113926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.948 [2024-11-20 07:44:01.113937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:42.948 [2024-11-20 07:44:01.113954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.948 qpair failed and we were unable to recover it.
00:29:42.948 [2024-11-20 07:44:01.123841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.948 [2024-11-20 07:44:01.123925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.948 [2024-11-20 07:44:01.123941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.948 [2024-11-20 07:44:01.123948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.948 [2024-11-20 07:44:01.123954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:42.948 [2024-11-20 07:44:01.123971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.948 qpair failed and we were unable to recover it.
00:29:42.948 [2024-11-20 07:44:01.133776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.948 [2024-11-20 07:44:01.133839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.948 [2024-11-20 07:44:01.133856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.948 [2024-11-20 07:44:01.133863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.948 [2024-11-20 07:44:01.133870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:42.948 [2024-11-20 07:44:01.133886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.948 qpair failed and we were unable to recover it.
00:29:42.949 [2024-11-20 07:44:01.143949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.949 [2024-11-20 07:44:01.144021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.949 [2024-11-20 07:44:01.144037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.949 [2024-11-20 07:44:01.144044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.949 [2024-11-20 07:44:01.144051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:42.949 [2024-11-20 07:44:01.144067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.949 qpair failed and we were unable to recover it.
00:29:43.211 [2024-11-20 07:44:01.153960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.211 [2024-11-20 07:44:01.154029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.211 [2024-11-20 07:44:01.154045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.211 [2024-11-20 07:44:01.154052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.211 [2024-11-20 07:44:01.154059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.211 [2024-11-20 07:44:01.154075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.211 qpair failed and we were unable to recover it.
00:29:43.211 [2024-11-20 07:44:01.163868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.211 [2024-11-20 07:44:01.163928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.211 [2024-11-20 07:44:01.163945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.211 [2024-11-20 07:44:01.163952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.211 [2024-11-20 07:44:01.163958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.211 [2024-11-20 07:44:01.163975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.211 qpair failed and we were unable to recover it.
00:29:43.211 [2024-11-20 07:44:01.174024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.211 [2024-11-20 07:44:01.174134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.211 [2024-11-20 07:44:01.174150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.211 [2024-11-20 07:44:01.174158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.211 [2024-11-20 07:44:01.174165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.211 [2024-11-20 07:44:01.174181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.211 qpair failed and we were unable to recover it.
00:29:43.211 [2024-11-20 07:44:01.184100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.211 [2024-11-20 07:44:01.184180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.211 [2024-11-20 07:44:01.184196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.211 [2024-11-20 07:44:01.184203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.211 [2024-11-20 07:44:01.184210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.211 [2024-11-20 07:44:01.184226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.211 qpair failed and we were unable to recover it.
00:29:43.211 [2024-11-20 07:44:01.194051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.211 [2024-11-20 07:44:01.194111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.211 [2024-11-20 07:44:01.194128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.211 [2024-11-20 07:44:01.194135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.211 [2024-11-20 07:44:01.194142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.211 [2024-11-20 07:44:01.194158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.211 qpair failed and we were unable to recover it.
00:29:43.211 [2024-11-20 07:44:01.204051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.211 [2024-11-20 07:44:01.204116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.211 [2024-11-20 07:44:01.204137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.211 [2024-11-20 07:44:01.204145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.211 [2024-11-20 07:44:01.204151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.211 [2024-11-20 07:44:01.204167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.211 qpair failed and we were unable to recover it.
00:29:43.212 [2024-11-20 07:44:01.214146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.212 [2024-11-20 07:44:01.214213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.212 [2024-11-20 07:44:01.214232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.212 [2024-11-20 07:44:01.214239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.212 [2024-11-20 07:44:01.214247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.212 [2024-11-20 07:44:01.214264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.212 qpair failed and we were unable to recover it.
00:29:43.212 [2024-11-20 07:44:01.224210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.212 [2024-11-20 07:44:01.224273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.212 [2024-11-20 07:44:01.224290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.212 [2024-11-20 07:44:01.224297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.212 [2024-11-20 07:44:01.224303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.212 [2024-11-20 07:44:01.224320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.212 qpair failed and we were unable to recover it.
00:29:43.212 [2024-11-20 07:44:01.234225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.212 [2024-11-20 07:44:01.234294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.212 [2024-11-20 07:44:01.234310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.212 [2024-11-20 07:44:01.234318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.212 [2024-11-20 07:44:01.234325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.212 [2024-11-20 07:44:01.234341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.212 qpair failed and we were unable to recover it.
00:29:43.212 [2024-11-20 07:44:01.244092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.212 [2024-11-20 07:44:01.244179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.212 [2024-11-20 07:44:01.244195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.212 [2024-11-20 07:44:01.244203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.212 [2024-11-20 07:44:01.244216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.212 [2024-11-20 07:44:01.244232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.212 qpair failed and we were unable to recover it.
00:29:43.212 [2024-11-20 07:44:01.254288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.212 [2024-11-20 07:44:01.254358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.212 [2024-11-20 07:44:01.254373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.212 [2024-11-20 07:44:01.254380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.212 [2024-11-20 07:44:01.254387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.212 [2024-11-20 07:44:01.254403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.212 qpair failed and we were unable to recover it.
00:29:43.212 [2024-11-20 07:44:01.264331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.212 [2024-11-20 07:44:01.264399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.212 [2024-11-20 07:44:01.264417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.212 [2024-11-20 07:44:01.264424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.212 [2024-11-20 07:44:01.264431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.212 [2024-11-20 07:44:01.264448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.212 qpair failed and we were unable to recover it.
00:29:43.212 [2024-11-20 07:44:01.274195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.212 [2024-11-20 07:44:01.274257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.212 [2024-11-20 07:44:01.274274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.212 [2024-11-20 07:44:01.274280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.212 [2024-11-20 07:44:01.274287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.212 [2024-11-20 07:44:01.274303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.212 qpair failed and we were unable to recover it.
00:29:43.212 [2024-11-20 07:44:01.284270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.212 [2024-11-20 07:44:01.284370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.212 [2024-11-20 07:44:01.284386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.212 [2024-11-20 07:44:01.284393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.212 [2024-11-20 07:44:01.284400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.212 [2024-11-20 07:44:01.284416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.212 qpair failed and we were unable to recover it.
00:29:43.212 [2024-11-20 07:44:01.294248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.212 [2024-11-20 07:44:01.294316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.212 [2024-11-20 07:44:01.294333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.212 [2024-11-20 07:44:01.294340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.212 [2024-11-20 07:44:01.294346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.212 [2024-11-20 07:44:01.294363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.212 qpair failed and we were unable to recover it.
00:29:43.212 [2024-11-20 07:44:01.304463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.212 [2024-11-20 07:44:01.304574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.212 [2024-11-20 07:44:01.304591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.212 [2024-11-20 07:44:01.304598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.212 [2024-11-20 07:44:01.304605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.212 [2024-11-20 07:44:01.304622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.212 qpair failed and we were unable to recover it.
00:29:43.212 [2024-11-20 07:44:01.314452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.212 [2024-11-20 07:44:01.314512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.212 [2024-11-20 07:44:01.314528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.212 [2024-11-20 07:44:01.314537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.212 [2024-11-20 07:44:01.314543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.212 [2024-11-20 07:44:01.314561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.212 qpair failed and we were unable to recover it.
00:29:43.212 [2024-11-20 07:44:01.324476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.212 [2024-11-20 07:44:01.324578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.212 [2024-11-20 07:44:01.324594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.212 [2024-11-20 07:44:01.324602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.212 [2024-11-20 07:44:01.324608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.212 [2024-11-20 07:44:01.324625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.212 qpair failed and we were unable to recover it.
00:29:43.212 [2024-11-20 07:44:01.334373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.212 [2024-11-20 07:44:01.334438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.212 [2024-11-20 07:44:01.334460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.212 [2024-11-20 07:44:01.334467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.212 [2024-11-20 07:44:01.334473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.212 [2024-11-20 07:44:01.334489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.213 qpair failed and we were unable to recover it.
00:29:43.213 [2024-11-20 07:44:01.344564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.213 [2024-11-20 07:44:01.344645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.213 [2024-11-20 07:44:01.344662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.213 [2024-11-20 07:44:01.344669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.213 [2024-11-20 07:44:01.344675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.213 [2024-11-20 07:44:01.344692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.213 qpair failed and we were unable to recover it.
00:29:43.213 [2024-11-20 07:44:01.354576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.213 [2024-11-20 07:44:01.354646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.213 [2024-11-20 07:44:01.354662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.213 [2024-11-20 07:44:01.354669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.213 [2024-11-20 07:44:01.354676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.213 [2024-11-20 07:44:01.354692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.213 qpair failed and we were unable to recover it.
00:29:43.213 [2024-11-20 07:44:01.364597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.213 [2024-11-20 07:44:01.364660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.213 [2024-11-20 07:44:01.364676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.213 [2024-11-20 07:44:01.364684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.213 [2024-11-20 07:44:01.364690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.213 [2024-11-20 07:44:01.364707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.213 qpair failed and we were unable to recover it.
00:29:43.213 [2024-11-20 07:44:01.374660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.213 [2024-11-20 07:44:01.374722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.213 [2024-11-20 07:44:01.374738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.213 [2024-11-20 07:44:01.374756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.213 [2024-11-20 07:44:01.374762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.213 [2024-11-20 07:44:01.374780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.213 qpair failed and we were unable to recover it.
00:29:43.213 [2024-11-20 07:44:01.384557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.213 [2024-11-20 07:44:01.384665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.213 [2024-11-20 07:44:01.384681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.213 [2024-11-20 07:44:01.384689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.213 [2024-11-20 07:44:01.384695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.213 [2024-11-20 07:44:01.384711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.213 qpair failed and we were unable to recover it.
00:29:43.213 [2024-11-20 07:44:01.394705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.213 [2024-11-20 07:44:01.394769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.213 [2024-11-20 07:44:01.394784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.213 [2024-11-20 07:44:01.394792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.213 [2024-11-20 07:44:01.394798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.213 [2024-11-20 07:44:01.394815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.213 qpair failed and we were unable to recover it.
00:29:43.213 [2024-11-20 07:44:01.404724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.213 [2024-11-20 07:44:01.404789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.213 [2024-11-20 07:44:01.404806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.213 [2024-11-20 07:44:01.404813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.213 [2024-11-20 07:44:01.404820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.213 [2024-11-20 07:44:01.404836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.213 qpair failed and we were unable to recover it.
00:29:43.213 [2024-11-20 07:44:01.414634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.213 [2024-11-20 07:44:01.414702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.213 [2024-11-20 07:44:01.414719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.213 [2024-11-20 07:44:01.414726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.213 [2024-11-20 07:44:01.414732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.213 [2024-11-20 07:44:01.414755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.213 qpair failed and we were unable to recover it.
00:29:43.475 [2024-11-20 07:44:01.424713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.475 [2024-11-20 07:44:01.424791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.475 [2024-11-20 07:44:01.424808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.475 [2024-11-20 07:44:01.424815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.475 [2024-11-20 07:44:01.424822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.475 [2024-11-20 07:44:01.424838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.475 qpair failed and we were unable to recover it.
00:29:43.475 [2024-11-20 07:44:01.434835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.475 [2024-11-20 07:44:01.434916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.475 [2024-11-20 07:44:01.434932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.475 [2024-11-20 07:44:01.434940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.475 [2024-11-20 07:44:01.434946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.475 [2024-11-20 07:44:01.434962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.475 qpair failed and we were unable to recover it.
00:29:43.475 [2024-11-20 07:44:01.444853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.475 [2024-11-20 07:44:01.444965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.475 [2024-11-20 07:44:01.444980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.475 [2024-11-20 07:44:01.444988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.475 [2024-11-20 07:44:01.444994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.475 [2024-11-20 07:44:01.445010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.475 qpair failed and we were unable to recover it.
00:29:43.475 [2024-11-20 07:44:01.454901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.475 [2024-11-20 07:44:01.454972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.475 [2024-11-20 07:44:01.454992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.475 [2024-11-20 07:44:01.455000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.475 [2024-11-20 07:44:01.455010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.475 [2024-11-20 07:44:01.455028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.475 qpair failed and we were unable to recover it.
00:29:43.475 [2024-11-20 07:44:01.464957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.475 [2024-11-20 07:44:01.465046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.475 [2024-11-20 07:44:01.465065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.475 [2024-11-20 07:44:01.465073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.475 [2024-11-20 07:44:01.465079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.475 [2024-11-20 07:44:01.465097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.475 qpair failed and we were unable to recover it.
00:29:43.475 [2024-11-20 07:44:01.474970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.475 [2024-11-20 07:44:01.475035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.475 [2024-11-20 07:44:01.475052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.475 [2024-11-20 07:44:01.475059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.475 [2024-11-20 07:44:01.475066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.475 [2024-11-20 07:44:01.475082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.475 qpair failed and we were unable to recover it.
00:29:43.475 [2024-11-20 07:44:01.484960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.475 [2024-11-20 07:44:01.485019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.475 [2024-11-20 07:44:01.485036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.475 [2024-11-20 07:44:01.485043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.475 [2024-11-20 07:44:01.485049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.475 [2024-11-20 07:44:01.485066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.475 qpair failed and we were unable to recover it.
00:29:43.475 [2024-11-20 07:44:01.495026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.475 [2024-11-20 07:44:01.495094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.475 [2024-11-20 07:44:01.495110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.475 [2024-11-20 07:44:01.495117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.475 [2024-11-20 07:44:01.495124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.475 [2024-11-20 07:44:01.495140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.475 qpair failed and we were unable to recover it.
00:29:43.475 [2024-11-20 07:44:01.505099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.475 [2024-11-20 07:44:01.505193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.475 [2024-11-20 07:44:01.505209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.475 [2024-11-20 07:44:01.505230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.476 [2024-11-20 07:44:01.505236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.476 [2024-11-20 07:44:01.505253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.476 qpair failed and we were unable to recover it.
00:29:43.476 [2024-11-20 07:44:01.515078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.476 [2024-11-20 07:44:01.515145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.476 [2024-11-20 07:44:01.515164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.476 [2024-11-20 07:44:01.515172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.476 [2024-11-20 07:44:01.515180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.476 [2024-11-20 07:44:01.515199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.476 qpair failed and we were unable to recover it.
00:29:43.476 [2024-11-20 07:44:01.525078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.476 [2024-11-20 07:44:01.525134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.476 [2024-11-20 07:44:01.525151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.476 [2024-11-20 07:44:01.525158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.476 [2024-11-20 07:44:01.525164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.476 [2024-11-20 07:44:01.525181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.476 qpair failed and we were unable to recover it.
00:29:43.476 [2024-11-20 07:44:01.535140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.476 [2024-11-20 07:44:01.535208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.476 [2024-11-20 07:44:01.535223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.476 [2024-11-20 07:44:01.535230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.476 [2024-11-20 07:44:01.535237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.476 [2024-11-20 07:44:01.535254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.476 qpair failed and we were unable to recover it.
00:29:43.476 [2024-11-20 07:44:01.545105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.476 [2024-11-20 07:44:01.545177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.476 [2024-11-20 07:44:01.545192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.476 [2024-11-20 07:44:01.545200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.476 [2024-11-20 07:44:01.545207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.476 [2024-11-20 07:44:01.545229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.476 qpair failed and we were unable to recover it.
00:29:43.476 [2024-11-20 07:44:01.555216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.476 [2024-11-20 07:44:01.555278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.476 [2024-11-20 07:44:01.555293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.476 [2024-11-20 07:44:01.555300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.476 [2024-11-20 07:44:01.555307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.476 [2024-11-20 07:44:01.555323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.476 qpair failed and we were unable to recover it.
00:29:43.476 [2024-11-20 07:44:01.565215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.476 [2024-11-20 07:44:01.565292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.476 [2024-11-20 07:44:01.565308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.476 [2024-11-20 07:44:01.565315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.476 [2024-11-20 07:44:01.565321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.476 [2024-11-20 07:44:01.565337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.476 qpair failed and we were unable to recover it.
00:29:43.476 [2024-11-20 07:44:01.575264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.476 [2024-11-20 07:44:01.575338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.476 [2024-11-20 07:44:01.575358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.476 [2024-11-20 07:44:01.575365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.476 [2024-11-20 07:44:01.575376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.476 [2024-11-20 07:44:01.575395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.476 qpair failed and we were unable to recover it.
00:29:43.476 [2024-11-20 07:44:01.585318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.476 [2024-11-20 07:44:01.585414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.476 [2024-11-20 07:44:01.585432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.476 [2024-11-20 07:44:01.585440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.476 [2024-11-20 07:44:01.585446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.476 [2024-11-20 07:44:01.585464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.476 qpair failed and we were unable to recover it.
00:29:43.476 [2024-11-20 07:44:01.595379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.476 [2024-11-20 07:44:01.595443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.476 [2024-11-20 07:44:01.595460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.476 [2024-11-20 07:44:01.595467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.476 [2024-11-20 07:44:01.595474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.476 [2024-11-20 07:44:01.595490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.476 qpair failed and we were unable to recover it.
00:29:43.476 [2024-11-20 07:44:01.605333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.476 [2024-11-20 07:44:01.605397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.476 [2024-11-20 07:44:01.605414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.476 [2024-11-20 07:44:01.605422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.476 [2024-11-20 07:44:01.605428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.476 [2024-11-20 07:44:01.605445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.476 qpair failed and we were unable to recover it.
00:29:43.476 [2024-11-20 07:44:01.615349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.476 [2024-11-20 07:44:01.615457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.476 [2024-11-20 07:44:01.615473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.476 [2024-11-20 07:44:01.615481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.476 [2024-11-20 07:44:01.615487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.476 [2024-11-20 07:44:01.615503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.476 qpair failed and we were unable to recover it.
00:29:43.476 [2024-11-20 07:44:01.625455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.476 [2024-11-20 07:44:01.625522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.476 [2024-11-20 07:44:01.625538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.476 [2024-11-20 07:44:01.625546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.476 [2024-11-20 07:44:01.625553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.476 [2024-11-20 07:44:01.625569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.476 qpair failed and we were unable to recover it.
00:29:43.476 [2024-11-20 07:44:01.635445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.476 [2024-11-20 07:44:01.635503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.477 [2024-11-20 07:44:01.635525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.477 [2024-11-20 07:44:01.635533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.477 [2024-11-20 07:44:01.635539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.477 [2024-11-20 07:44:01.635556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.477 qpair failed and we were unable to recover it.
00:29:43.477 [2024-11-20 07:44:01.645474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.477 [2024-11-20 07:44:01.645530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.477 [2024-11-20 07:44:01.645547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.477 [2024-11-20 07:44:01.645554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.477 [2024-11-20 07:44:01.645561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.477 [2024-11-20 07:44:01.645577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.477 qpair failed and we were unable to recover it.
00:29:43.477 [2024-11-20 07:44:01.655489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.477 [2024-11-20 07:44:01.655588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.477 [2024-11-20 07:44:01.655604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.477 [2024-11-20 07:44:01.655611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.477 [2024-11-20 07:44:01.655617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.477 [2024-11-20 07:44:01.655633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.477 qpair failed and we were unable to recover it.
00:29:43.477 [2024-11-20 07:44:01.665554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.477 [2024-11-20 07:44:01.665630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.477 [2024-11-20 07:44:01.665647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.477 [2024-11-20 07:44:01.665654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.477 [2024-11-20 07:44:01.665661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.477 [2024-11-20 07:44:01.665677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.477 qpair failed and we were unable to recover it.
00:29:43.477 [2024-11-20 07:44:01.675542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.477 [2024-11-20 07:44:01.675609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.477 [2024-11-20 07:44:01.675624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.477 [2024-11-20 07:44:01.675631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.477 [2024-11-20 07:44:01.675643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.477 [2024-11-20 07:44:01.675660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.477 qpair failed and we were unable to recover it.
00:29:43.739 [2024-11-20 07:44:01.685575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.739 [2024-11-20 07:44:01.685638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.739 [2024-11-20 07:44:01.685654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.739 [2024-11-20 07:44:01.685661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.739 [2024-11-20 07:44:01.685667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.739 [2024-11-20 07:44:01.685683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.739 qpair failed and we were unable to recover it.
00:29:43.739 [2024-11-20 07:44:01.695600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.739 [2024-11-20 07:44:01.695701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.739 [2024-11-20 07:44:01.695717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.739 [2024-11-20 07:44:01.695724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.739 [2024-11-20 07:44:01.695731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.739 [2024-11-20 07:44:01.695753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.739 qpair failed and we were unable to recover it.
00:29:43.739 [2024-11-20 07:44:01.705677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.739 [2024-11-20 07:44:01.705742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.739 [2024-11-20 07:44:01.705764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.739 [2024-11-20 07:44:01.705771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.739 [2024-11-20 07:44:01.705778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.739 [2024-11-20 07:44:01.705794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.739 qpair failed and we were unable to recover it.
00:29:43.739 [2024-11-20 07:44:01.715648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.739 [2024-11-20 07:44:01.715709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.739 [2024-11-20 07:44:01.715725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.740 [2024-11-20 07:44:01.715732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.740 [2024-11-20 07:44:01.715738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.740 [2024-11-20 07:44:01.715761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.740 qpair failed and we were unable to recover it.
00:29:43.740 [2024-11-20 07:44:01.725545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.740 [2024-11-20 07:44:01.725611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.740 [2024-11-20 07:44:01.725626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.740 [2024-11-20 07:44:01.725634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.740 [2024-11-20 07:44:01.725640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.740 [2024-11-20 07:44:01.725656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.740 qpair failed and we were unable to recover it.
00:29:43.740 [2024-11-20 07:44:01.735611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.740 [2024-11-20 07:44:01.735681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.740 [2024-11-20 07:44:01.735699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.740 [2024-11-20 07:44:01.735707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.740 [2024-11-20 07:44:01.735716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.740 [2024-11-20 07:44:01.735734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.740 qpair failed and we were unable to recover it.
00:29:43.740 [2024-11-20 07:44:01.745795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.740 [2024-11-20 07:44:01.745859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.740 [2024-11-20 07:44:01.745877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.740 [2024-11-20 07:44:01.745885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.740 [2024-11-20 07:44:01.745891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.740 [2024-11-20 07:44:01.745908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.740 qpair failed and we were unable to recover it.
00:29:43.740 [2024-11-20 07:44:01.755796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.740 [2024-11-20 07:44:01.755866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.740 [2024-11-20 07:44:01.755882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.740 [2024-11-20 07:44:01.755890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.740 [2024-11-20 07:44:01.755896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.740 [2024-11-20 07:44:01.755913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.740 qpair failed and we were unable to recover it.
00:29:43.740 [2024-11-20 07:44:01.765691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.740 [2024-11-20 07:44:01.765761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.740 [2024-11-20 07:44:01.765783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.740 [2024-11-20 07:44:01.765791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.740 [2024-11-20 07:44:01.765798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.740 [2024-11-20 07:44:01.765815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.740 qpair failed and we were unable to recover it.
00:29:43.740 [2024-11-20 07:44:01.775884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.740 [2024-11-20 07:44:01.775948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.740 [2024-11-20 07:44:01.775965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.740 [2024-11-20 07:44:01.775973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.740 [2024-11-20 07:44:01.775980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.740 [2024-11-20 07:44:01.775996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.740 qpair failed and we were unable to recover it.
00:29:43.740 [2024-11-20 07:44:01.785931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.740 [2024-11-20 07:44:01.786005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.740 [2024-11-20 07:44:01.786022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.740 [2024-11-20 07:44:01.786030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.740 [2024-11-20 07:44:01.786037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.740 [2024-11-20 07:44:01.786054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.740 qpair failed and we were unable to recover it.
00:29:43.740 [2024-11-20 07:44:01.795896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.740 [2024-11-20 07:44:01.795961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.740 [2024-11-20 07:44:01.795978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.740 [2024-11-20 07:44:01.795986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.740 [2024-11-20 07:44:01.795993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.740 [2024-11-20 07:44:01.796010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.740 qpair failed and we were unable to recover it.
00:29:43.740 [2024-11-20 07:44:01.805799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.740 [2024-11-20 07:44:01.805864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.740 [2024-11-20 07:44:01.805880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.740 [2024-11-20 07:44:01.805887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.740 [2024-11-20 07:44:01.805899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.740 [2024-11-20 07:44:01.805916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.740 qpair failed and we were unable to recover it.
00:29:43.740 [2024-11-20 07:44:01.815985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.740 [2024-11-20 07:44:01.816054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.740 [2024-11-20 07:44:01.816070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.740 [2024-11-20 07:44:01.816077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.740 [2024-11-20 07:44:01.816084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.740 [2024-11-20 07:44:01.816099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.740 qpair failed and we were unable to recover it.
00:29:43.740 [2024-11-20 07:44:01.826019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.740 [2024-11-20 07:44:01.826095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.740 [2024-11-20 07:44:01.826111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.740 [2024-11-20 07:44:01.826119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.740 [2024-11-20 07:44:01.826126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.740 [2024-11-20 07:44:01.826142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.740 qpair failed and we were unable to recover it.
00:29:43.740 [2024-11-20 07:44:01.836027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.740 [2024-11-20 07:44:01.836094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.740 [2024-11-20 07:44:01.836110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.740 [2024-11-20 07:44:01.836117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.740 [2024-11-20 07:44:01.836123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.740 [2024-11-20 07:44:01.836140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.740 qpair failed and we were unable to recover it.
00:29:43.740 [2024-11-20 07:44:01.846069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.740 [2024-11-20 07:44:01.846133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.740 [2024-11-20 07:44:01.846148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.741 [2024-11-20 07:44:01.846156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.741 [2024-11-20 07:44:01.846163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.741 [2024-11-20 07:44:01.846179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.741 qpair failed and we were unable to recover it.
00:29:43.741 [2024-11-20 07:44:01.856105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.741 [2024-11-20 07:44:01.856173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.741 [2024-11-20 07:44:01.856189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.741 [2024-11-20 07:44:01.856197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.741 [2024-11-20 07:44:01.856203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.741 [2024-11-20 07:44:01.856220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.741 qpair failed and we were unable to recover it.
00:29:43.741 [2024-11-20 07:44:01.866143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.741 [2024-11-20 07:44:01.866219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.741 [2024-11-20 07:44:01.866235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.741 [2024-11-20 07:44:01.866243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.741 [2024-11-20 07:44:01.866250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.741 [2024-11-20 07:44:01.866266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.741 qpair failed and we were unable to recover it.
00:29:43.741 [2024-11-20 07:44:01.876139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.741 [2024-11-20 07:44:01.876199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.741 [2024-11-20 07:44:01.876215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.741 [2024-11-20 07:44:01.876222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.741 [2024-11-20 07:44:01.876229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.741 [2024-11-20 07:44:01.876246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.741 qpair failed and we were unable to recover it.
00:29:43.741 [2024-11-20 07:44:01.886230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.741 [2024-11-20 07:44:01.886298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.741 [2024-11-20 07:44:01.886314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.741 [2024-11-20 07:44:01.886321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.741 [2024-11-20 07:44:01.886328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.741 [2024-11-20 07:44:01.886344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.741 qpair failed and we were unable to recover it.
00:29:43.741 [2024-11-20 07:44:01.896237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.741 [2024-11-20 07:44:01.896306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.741 [2024-11-20 07:44:01.896326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.741 [2024-11-20 07:44:01.896334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.741 [2024-11-20 07:44:01.896340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.741 [2024-11-20 07:44:01.896356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.741 qpair failed and we were unable to recover it.
00:29:43.741 [2024-11-20 07:44:01.906292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.741 [2024-11-20 07:44:01.906374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.741 [2024-11-20 07:44:01.906390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.741 [2024-11-20 07:44:01.906397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.741 [2024-11-20 07:44:01.906404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.741 [2024-11-20 07:44:01.906419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.741 qpair failed and we were unable to recover it.
00:29:43.741 [2024-11-20 07:44:01.916286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.741 [2024-11-20 07:44:01.916351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.741 [2024-11-20 07:44:01.916367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.741 [2024-11-20 07:44:01.916374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.741 [2024-11-20 07:44:01.916381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.741 [2024-11-20 07:44:01.916397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.741 qpair failed and we were unable to recover it.
00:29:43.741 [2024-11-20 07:44:01.926311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.741 [2024-11-20 07:44:01.926384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.741 [2024-11-20 07:44:01.926400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.741 [2024-11-20 07:44:01.926407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.741 [2024-11-20 07:44:01.926414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.741 [2024-11-20 07:44:01.926430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.741 qpair failed and we were unable to recover it.
00:29:43.741 [2024-11-20 07:44:01.936342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.741 [2024-11-20 07:44:01.936406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.741 [2024-11-20 07:44:01.936422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.741 [2024-11-20 07:44:01.936435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.741 [2024-11-20 07:44:01.936442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:43.741 [2024-11-20 07:44:01.936458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.741 qpair failed and we were unable to recover it.
00:29:44.003 [2024-11-20 07:44:01.946386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.003 [2024-11-20 07:44:01.946460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.003 [2024-11-20 07:44:01.946476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.003 [2024-11-20 07:44:01.946484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.003 [2024-11-20 07:44:01.946490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.003 [2024-11-20 07:44:01.946506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.003 qpair failed and we were unable to recover it.
00:29:44.003 [2024-11-20 07:44:01.956386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.003 [2024-11-20 07:44:01.956455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.003 [2024-11-20 07:44:01.956489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.003 [2024-11-20 07:44:01.956498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.003 [2024-11-20 07:44:01.956505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.003 [2024-11-20 07:44:01.956528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.003 qpair failed and we were unable to recover it.
00:29:44.003 [2024-11-20 07:44:01.966338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.003 [2024-11-20 07:44:01.966403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.003 [2024-11-20 07:44:01.966437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.003 [2024-11-20 07:44:01.966445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.003 [2024-11-20 07:44:01.966453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.003 [2024-11-20 07:44:01.966476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.003 qpair failed and we were unable to recover it.
00:29:44.003 [2024-11-20 07:44:01.976431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.003 [2024-11-20 07:44:01.976502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.003 [2024-11-20 07:44:01.976535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.003 [2024-11-20 07:44:01.976545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.003 [2024-11-20 07:44:01.976554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.003 [2024-11-20 07:44:01.976585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.003 qpair failed and we were unable to recover it.
00:29:44.003 [2024-11-20 07:44:01.986478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.003 [2024-11-20 07:44:01.986542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.003 [2024-11-20 07:44:01.986574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.003 [2024-11-20 07:44:01.986584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.003 [2024-11-20 07:44:01.986591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.003 [2024-11-20 07:44:01.986615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.003 qpair failed and we were unable to recover it.
00:29:44.003 [2024-11-20 07:44:01.996482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.003 [2024-11-20 07:44:01.996577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.003 [2024-11-20 07:44:01.996596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.003 [2024-11-20 07:44:01.996604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.003 [2024-11-20 07:44:01.996611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.003 [2024-11-20 07:44:01.996628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.003 qpair failed and we were unable to recover it.
00:29:44.003 [2024-11-20 07:44:02.006449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.003 [2024-11-20 07:44:02.006528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.003 [2024-11-20 07:44:02.006544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.003 [2024-11-20 07:44:02.006551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.003 [2024-11-20 07:44:02.006558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.003 [2024-11-20 07:44:02.006574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.003 qpair failed and we were unable to recover it.
00:29:44.003 [2024-11-20 07:44:02.016483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.003 [2024-11-20 07:44:02.016535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.003 [2024-11-20 07:44:02.016549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.003 [2024-11-20 07:44:02.016556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.003 [2024-11-20 07:44:02.016562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.003 [2024-11-20 07:44:02.016578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.003 qpair failed and we were unable to recover it.
00:29:44.003 [2024-11-20 07:44:02.026581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.003 [2024-11-20 07:44:02.026650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.003 [2024-11-20 07:44:02.026665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.003 [2024-11-20 07:44:02.026672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.003 [2024-11-20 07:44:02.026678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.003 [2024-11-20 07:44:02.026693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.003 qpair failed and we were unable to recover it.
00:29:44.003 [2024-11-20 07:44:02.036584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.003 [2024-11-20 07:44:02.036655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.003 [2024-11-20 07:44:02.036669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.003 [2024-11-20 07:44:02.036677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.003 [2024-11-20 07:44:02.036683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.003 [2024-11-20 07:44:02.036698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.003 qpair failed and we were unable to recover it.
00:29:44.004 [2024-11-20 07:44:02.046413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.004 [2024-11-20 07:44:02.046458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.004 [2024-11-20 07:44:02.046472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.004 [2024-11-20 07:44:02.046480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.004 [2024-11-20 07:44:02.046486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.004 [2024-11-20 07:44:02.046501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.004 qpair failed and we were unable to recover it.
00:29:44.004 [2024-11-20 07:44:02.056593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.004 [2024-11-20 07:44:02.056645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.004 [2024-11-20 07:44:02.056660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.004 [2024-11-20 07:44:02.056667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.004 [2024-11-20 07:44:02.056674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.004 [2024-11-20 07:44:02.056688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.004 qpair failed and we were unable to recover it.
00:29:44.004 [2024-11-20 07:44:02.066614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.004 [2024-11-20 07:44:02.066665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.004 [2024-11-20 07:44:02.066679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.004 [2024-11-20 07:44:02.066691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.004 [2024-11-20 07:44:02.066698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.004 [2024-11-20 07:44:02.066712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.004 qpair failed and we were unable to recover it.
00:29:44.004 [2024-11-20 07:44:02.076658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.004 [2024-11-20 07:44:02.076713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.004 [2024-11-20 07:44:02.076727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.004 [2024-11-20 07:44:02.076734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.004 [2024-11-20 07:44:02.076740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.004 [2024-11-20 07:44:02.076759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.004 qpair failed and we were unable to recover it.
00:29:44.004 [2024-11-20 07:44:02.086600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.004 [2024-11-20 07:44:02.086643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.004 [2024-11-20 07:44:02.086657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.004 [2024-11-20 07:44:02.086663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.004 [2024-11-20 07:44:02.086670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.004 [2024-11-20 07:44:02.086684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.004 qpair failed and we were unable to recover it.
00:29:44.004 [2024-11-20 07:44:02.096687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.004 [2024-11-20 07:44:02.096735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.004 [2024-11-20 07:44:02.096753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.004 [2024-11-20 07:44:02.096761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.004 [2024-11-20 07:44:02.096767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.004 [2024-11-20 07:44:02.096781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.004 qpair failed and we were unable to recover it.
00:29:44.004 [2024-11-20 07:44:02.106725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.004 [2024-11-20 07:44:02.106779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.004 [2024-11-20 07:44:02.106792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.004 [2024-11-20 07:44:02.106799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.004 [2024-11-20 07:44:02.106806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.004 [2024-11-20 07:44:02.106824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.004 qpair failed and we were unable to recover it.
00:29:44.004 [2024-11-20 07:44:02.116774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.004 [2024-11-20 07:44:02.116822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.004 [2024-11-20 07:44:02.116836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.004 [2024-11-20 07:44:02.116843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.004 [2024-11-20 07:44:02.116849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.004 [2024-11-20 07:44:02.116863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.004 qpair failed and we were unable to recover it.
00:29:44.004 [2024-11-20 07:44:02.126756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.004 [2024-11-20 07:44:02.126800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.004 [2024-11-20 07:44:02.126813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.004 [2024-11-20 07:44:02.126820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.004 [2024-11-20 07:44:02.126826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.004 [2024-11-20 07:44:02.126840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.004 qpair failed and we were unable to recover it.
00:29:44.004 [2024-11-20 07:44:02.136779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.004 [2024-11-20 07:44:02.136832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.004 [2024-11-20 07:44:02.136846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.004 [2024-11-20 07:44:02.136853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.004 [2024-11-20 07:44:02.136859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.004 [2024-11-20 07:44:02.136873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.004 qpair failed and we were unable to recover it.
00:29:44.004 [2024-11-20 07:44:02.146726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.004 [2024-11-20 07:44:02.146778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.004 [2024-11-20 07:44:02.146792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.004 [2024-11-20 07:44:02.146799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.004 [2024-11-20 07:44:02.146805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.004 [2024-11-20 07:44:02.146819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.004 qpair failed and we were unable to recover it.
00:29:44.004 [2024-11-20 07:44:02.156850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.004 [2024-11-20 07:44:02.156896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.004 [2024-11-20 07:44:02.156909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.004 [2024-11-20 07:44:02.156916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.004 [2024-11-20 07:44:02.156922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.004 [2024-11-20 07:44:02.156937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.004 qpair failed and we were unable to recover it.
00:29:44.004 [2024-11-20 07:44:02.166827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.004 [2024-11-20 07:44:02.166872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.004 [2024-11-20 07:44:02.166885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.004 [2024-11-20 07:44:02.166892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.005 [2024-11-20 07:44:02.166898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.005 [2024-11-20 07:44:02.166913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.005 qpair failed and we were unable to recover it.
00:29:44.005 [2024-11-20 07:44:02.176772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.005 [2024-11-20 07:44:02.176820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.005 [2024-11-20 07:44:02.176833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.005 [2024-11-20 07:44:02.176840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.005 [2024-11-20 07:44:02.176846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.005 [2024-11-20 07:44:02.176860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.005 qpair failed and we were unable to recover it.
00:29:44.005 [2024-11-20 07:44:02.186796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.005 [2024-11-20 07:44:02.186845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.005 [2024-11-20 07:44:02.186859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.005 [2024-11-20 07:44:02.186867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.005 [2024-11-20 07:44:02.186873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.005 [2024-11-20 07:44:02.186893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.005 qpair failed and we were unable to recover it.
00:29:44.005 [2024-11-20 07:44:02.196995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.005 [2024-11-20 07:44:02.197054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.005 [2024-11-20 07:44:02.197074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.005 [2024-11-20 07:44:02.197081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.005 [2024-11-20 07:44:02.197087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.005 [2024-11-20 07:44:02.197102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.005 qpair failed and we were unable to recover it.
00:29:44.005 [2024-11-20 07:44:02.206960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.005 [2024-11-20 07:44:02.207011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.005 [2024-11-20 07:44:02.207026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.005 [2024-11-20 07:44:02.207033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.005 [2024-11-20 07:44:02.207039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.005 [2024-11-20 07:44:02.207054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.005 qpair failed and we were unable to recover it.
00:29:44.266 [2024-11-20 07:44:02.216999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.266 [2024-11-20 07:44:02.217044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.266 [2024-11-20 07:44:02.217058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.266 [2024-11-20 07:44:02.217065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.266 [2024-11-20 07:44:02.217071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.266 [2024-11-20 07:44:02.217086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.266 qpair failed and we were unable to recover it.
00:29:44.266 [2024-11-20 07:44:02.227035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.266 [2024-11-20 07:44:02.227105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.266 [2024-11-20 07:44:02.227118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.266 [2024-11-20 07:44:02.227125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.266 [2024-11-20 07:44:02.227132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.266 [2024-11-20 07:44:02.227146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.266 qpair failed and we were unable to recover it.
00:29:44.266 [2024-11-20 07:44:02.237095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.266 [2024-11-20 07:44:02.237171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.266 [2024-11-20 07:44:02.237185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.266 [2024-11-20 07:44:02.237192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.266 [2024-11-20 07:44:02.237201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.266 [2024-11-20 07:44:02.237216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.266 qpair failed and we were unable to recover it.
00:29:44.266 [2024-11-20 07:44:02.247077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.266 [2024-11-20 07:44:02.247119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.266 [2024-11-20 07:44:02.247132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.266 [2024-11-20 07:44:02.247139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.266 [2024-11-20 07:44:02.247145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:44.266 [2024-11-20 07:44:02.247159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:44.266 qpair failed and we were unable to recover it.
00:29:44.266 [2024-11-20 07:44:02.257094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.266 [2024-11-20 07:44:02.257141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.266 [2024-11-20 07:44:02.257154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.266 [2024-11-20 07:44:02.257161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.266 [2024-11-20 07:44:02.257167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.266 [2024-11-20 07:44:02.257181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.266 qpair failed and we were unable to recover it. 00:29:44.266 [2024-11-20 07:44:02.267002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.267 [2024-11-20 07:44:02.267050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.267 [2024-11-20 07:44:02.267062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.267 [2024-11-20 07:44:02.267070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.267 [2024-11-20 07:44:02.267076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.267 [2024-11-20 07:44:02.267090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.267 qpair failed and we were unable to recover it. 00:29:44.267 [2024-11-20 07:44:02.277194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.267 [2024-11-20 07:44:02.277240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.267 [2024-11-20 07:44:02.277254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.267 [2024-11-20 07:44:02.277261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.267 [2024-11-20 07:44:02.277267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.267 [2024-11-20 07:44:02.277282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.267 qpair failed and we were unable to recover it. 
00:29:44.267 [2024-11-20 07:44:02.287036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.267 [2024-11-20 07:44:02.287079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.267 [2024-11-20 07:44:02.287092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.267 [2024-11-20 07:44:02.287099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.267 [2024-11-20 07:44:02.287105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.267 [2024-11-20 07:44:02.287119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.267 qpair failed and we were unable to recover it. 00:29:44.267 [2024-11-20 07:44:02.297207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.267 [2024-11-20 07:44:02.297299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.267 [2024-11-20 07:44:02.297313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.267 [2024-11-20 07:44:02.297320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.267 [2024-11-20 07:44:02.297327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.267 [2024-11-20 07:44:02.297342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.267 qpair failed and we were unable to recover it. 00:29:44.267 [2024-11-20 07:44:02.307247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.267 [2024-11-20 07:44:02.307298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.267 [2024-11-20 07:44:02.307312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.267 [2024-11-20 07:44:02.307318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.267 [2024-11-20 07:44:02.307325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.267 [2024-11-20 07:44:02.307339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.267 qpair failed and we were unable to recover it. 
00:29:44.267 [2024-11-20 07:44:02.317312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.267 [2024-11-20 07:44:02.317363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.267 [2024-11-20 07:44:02.317376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.267 [2024-11-20 07:44:02.317383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.267 [2024-11-20 07:44:02.317389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.267 [2024-11-20 07:44:02.317403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.267 qpair failed and we were unable to recover it. 00:29:44.267 [2024-11-20 07:44:02.327281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.267 [2024-11-20 07:44:02.327326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.267 [2024-11-20 07:44:02.327342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.267 [2024-11-20 07:44:02.327349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.267 [2024-11-20 07:44:02.327355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.267 [2024-11-20 07:44:02.327370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.267 qpair failed and we were unable to recover it. 00:29:44.267 [2024-11-20 07:44:02.337315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.267 [2024-11-20 07:44:02.337388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.267 [2024-11-20 07:44:02.337401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.267 [2024-11-20 07:44:02.337408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.267 [2024-11-20 07:44:02.337415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.267 [2024-11-20 07:44:02.337429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.267 qpair failed and we were unable to recover it. 
00:29:44.267 [2024-11-20 07:44:02.347341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.267 [2024-11-20 07:44:02.347389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.267 [2024-11-20 07:44:02.347402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.267 [2024-11-20 07:44:02.347409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.267 [2024-11-20 07:44:02.347415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.267 [2024-11-20 07:44:02.347429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.267 qpair failed and we were unable to recover it. 00:29:44.267 [2024-11-20 07:44:02.357377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.267 [2024-11-20 07:44:02.357436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.267 [2024-11-20 07:44:02.357449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.267 [2024-11-20 07:44:02.357455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.267 [2024-11-20 07:44:02.357462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.267 [2024-11-20 07:44:02.357476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.267 qpair failed and we were unable to recover it. 00:29:44.267 [2024-11-20 07:44:02.367373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.267 [2024-11-20 07:44:02.367414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.267 [2024-11-20 07:44:02.367427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.267 [2024-11-20 07:44:02.367434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.267 [2024-11-20 07:44:02.367444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.267 [2024-11-20 07:44:02.367458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.267 qpair failed and we were unable to recover it. 
00:29:44.267 [2024-11-20 07:44:02.377439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.267 [2024-11-20 07:44:02.377486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.267 [2024-11-20 07:44:02.377499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.267 [2024-11-20 07:44:02.377506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.267 [2024-11-20 07:44:02.377512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.267 [2024-11-20 07:44:02.377526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.267 qpair failed and we were unable to recover it. 00:29:44.267 [2024-11-20 07:44:02.387456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.267 [2024-11-20 07:44:02.387508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.267 [2024-11-20 07:44:02.387521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.267 [2024-11-20 07:44:02.387528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.267 [2024-11-20 07:44:02.387534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.267 [2024-11-20 07:44:02.387548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.267 qpair failed and we were unable to recover it. 00:29:44.267 [2024-11-20 07:44:02.397515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.268 [2024-11-20 07:44:02.397593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.268 [2024-11-20 07:44:02.397607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.268 [2024-11-20 07:44:02.397615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.268 [2024-11-20 07:44:02.397621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.268 [2024-11-20 07:44:02.397639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.268 qpair failed and we were unable to recover it. 
00:29:44.268 [2024-11-20 07:44:02.407367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.268 [2024-11-20 07:44:02.407460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.268 [2024-11-20 07:44:02.407474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.268 [2024-11-20 07:44:02.407481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.268 [2024-11-20 07:44:02.407487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.268 [2024-11-20 07:44:02.407501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.268 qpair failed and we were unable to recover it. 00:29:44.268 [2024-11-20 07:44:02.417546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.268 [2024-11-20 07:44:02.417642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.268 [2024-11-20 07:44:02.417656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.268 [2024-11-20 07:44:02.417663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.268 [2024-11-20 07:44:02.417670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.268 [2024-11-20 07:44:02.417684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.268 qpair failed and we were unable to recover it. 00:29:44.268 [2024-11-20 07:44:02.427430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.268 [2024-11-20 07:44:02.427474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.268 [2024-11-20 07:44:02.427487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.268 [2024-11-20 07:44:02.427494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.268 [2024-11-20 07:44:02.427500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.268 [2024-11-20 07:44:02.427515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.268 qpair failed and we were unable to recover it. 
00:29:44.268 [2024-11-20 07:44:02.437606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.268 [2024-11-20 07:44:02.437659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.268 [2024-11-20 07:44:02.437672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.268 [2024-11-20 07:44:02.437679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.268 [2024-11-20 07:44:02.437685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.268 [2024-11-20 07:44:02.437699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.268 qpair failed and we were unable to recover it. 00:29:44.268 [2024-11-20 07:44:02.447581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.268 [2024-11-20 07:44:02.447623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.268 [2024-11-20 07:44:02.447636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.268 [2024-11-20 07:44:02.447643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.268 [2024-11-20 07:44:02.447649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.268 [2024-11-20 07:44:02.447663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.268 qpair failed and we were unable to recover it. 00:29:44.268 [2024-11-20 07:44:02.457515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.268 [2024-11-20 07:44:02.457562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.268 [2024-11-20 07:44:02.457578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.268 [2024-11-20 07:44:02.457585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.268 [2024-11-20 07:44:02.457591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.268 [2024-11-20 07:44:02.457605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.268 qpair failed and we were unable to recover it. 
00:29:44.268 [2024-11-20 07:44:02.467613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.268 [2024-11-20 07:44:02.467694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.268 [2024-11-20 07:44:02.467707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.268 [2024-11-20 07:44:02.467714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.268 [2024-11-20 07:44:02.467720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.268 [2024-11-20 07:44:02.467734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.268 qpair failed and we were unable to recover it. 00:29:44.530 [2024-11-20 07:44:02.477617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.530 [2024-11-20 07:44:02.477669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.530 [2024-11-20 07:44:02.477682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.530 [2024-11-20 07:44:02.477689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.530 [2024-11-20 07:44:02.477696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.530 [2024-11-20 07:44:02.477709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.530 qpair failed and we were unable to recover it. 00:29:44.530 [2024-11-20 07:44:02.487722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.530 [2024-11-20 07:44:02.487768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.530 [2024-11-20 07:44:02.487782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.530 [2024-11-20 07:44:02.487792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.530 [2024-11-20 07:44:02.487801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.530 [2024-11-20 07:44:02.487817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.530 qpair failed and we were unable to recover it. 
00:29:44.530 [2024-11-20 07:44:02.497790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.530 [2024-11-20 07:44:02.497836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.530 [2024-11-20 07:44:02.497849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.530 [2024-11-20 07:44:02.497859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.530 [2024-11-20 07:44:02.497866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.530 [2024-11-20 07:44:02.497880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.530 qpair failed and we were unable to recover it. 00:29:44.530 [2024-11-20 07:44:02.507778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.530 [2024-11-20 07:44:02.507831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.530 [2024-11-20 07:44:02.507844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.530 [2024-11-20 07:44:02.507851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.530 [2024-11-20 07:44:02.507857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.530 [2024-11-20 07:44:02.507871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.530 qpair failed and we were unable to recover it. 00:29:44.530 [2024-11-20 07:44:02.517865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.530 [2024-11-20 07:44:02.517915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.530 [2024-11-20 07:44:02.517928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.530 [2024-11-20 07:44:02.517935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.530 [2024-11-20 07:44:02.517941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.530 [2024-11-20 07:44:02.517956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.530 qpair failed and we were unable to recover it. 
00:29:44.530 [2024-11-20 07:44:02.527784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.530 [2024-11-20 07:44:02.527831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.530 [2024-11-20 07:44:02.527845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.531 [2024-11-20 07:44:02.527852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.531 [2024-11-20 07:44:02.527859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.531 [2024-11-20 07:44:02.527873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.531 qpair failed and we were unable to recover it. 00:29:44.531 [2024-11-20 07:44:02.537866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.531 [2024-11-20 07:44:02.537946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.531 [2024-11-20 07:44:02.537960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.531 [2024-11-20 07:44:02.537967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.531 [2024-11-20 07:44:02.537973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.531 [2024-11-20 07:44:02.537991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.531 qpair failed and we were unable to recover it. 00:29:44.531 [2024-11-20 07:44:02.547886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.531 [2024-11-20 07:44:02.547936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.531 [2024-11-20 07:44:02.547949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.531 [2024-11-20 07:44:02.547956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.531 [2024-11-20 07:44:02.547963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.531 [2024-11-20 07:44:02.547977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.531 qpair failed and we were unable to recover it. 
00:29:44.531 [2024-11-20 07:44:02.557951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.531 [2024-11-20 07:44:02.557998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.531 [2024-11-20 07:44:02.558011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.531 [2024-11-20 07:44:02.558018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.531 [2024-11-20 07:44:02.558024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.531 [2024-11-20 07:44:02.558038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.531 qpair failed and we were unable to recover it. 00:29:44.531 [2024-11-20 07:44:02.567892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.531 [2024-11-20 07:44:02.567965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.531 [2024-11-20 07:44:02.567978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.531 [2024-11-20 07:44:02.567986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.531 [2024-11-20 07:44:02.567992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.531 [2024-11-20 07:44:02.568006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.531 qpair failed and we were unable to recover it. 00:29:44.531 [2024-11-20 07:44:02.577969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.531 [2024-11-20 07:44:02.578020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.531 [2024-11-20 07:44:02.578032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.531 [2024-11-20 07:44:02.578039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.531 [2024-11-20 07:44:02.578045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.531 [2024-11-20 07:44:02.578059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.531 qpair failed and we were unable to recover it. 
00:29:44.531 [2024-11-20 07:44:02.588010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.531 [2024-11-20 07:44:02.588069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.531 [2024-11-20 07:44:02.588082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.531 [2024-11-20 07:44:02.588089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.531 [2024-11-20 07:44:02.588095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.531 [2024-11-20 07:44:02.588109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.531 qpair failed and we were unable to recover it. 00:29:44.531 [2024-11-20 07:44:02.598046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.531 [2024-11-20 07:44:02.598095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.531 [2024-11-20 07:44:02.598108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.531 [2024-11-20 07:44:02.598115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.531 [2024-11-20 07:44:02.598121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.531 [2024-11-20 07:44:02.598135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.531 qpair failed and we were unable to recover it. 00:29:44.531 [2024-11-20 07:44:02.608034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.531 [2024-11-20 07:44:02.608076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.531 [2024-11-20 07:44:02.608089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.531 [2024-11-20 07:44:02.608096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.531 [2024-11-20 07:44:02.608102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.531 [2024-11-20 07:44:02.608116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.531 qpair failed and we were unable to recover it. 
00:29:44.531 [2024-11-20 07:44:02.618072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.531 [2024-11-20 07:44:02.618121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.531 [2024-11-20 07:44:02.618133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.531 [2024-11-20 07:44:02.618140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.531 [2024-11-20 07:44:02.618147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.531 [2024-11-20 07:44:02.618160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.531 qpair failed and we were unable to recover it. 00:29:44.531 [2024-11-20 07:44:02.628101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.531 [2024-11-20 07:44:02.628150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.531 [2024-11-20 07:44:02.628163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.531 [2024-11-20 07:44:02.628174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.531 [2024-11-20 07:44:02.628180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.531 [2024-11-20 07:44:02.628194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.531 qpair failed and we were unable to recover it. 00:29:44.531 [2024-11-20 07:44:02.638283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.531 [2024-11-20 07:44:02.638351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.531 [2024-11-20 07:44:02.638364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.531 [2024-11-20 07:44:02.638371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.531 [2024-11-20 07:44:02.638377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.532 [2024-11-20 07:44:02.638391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.532 qpair failed and we were unable to recover it. 
00:29:44.532 [2024-11-20 07:44:02.648158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.532 [2024-11-20 07:44:02.648201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.532 [2024-11-20 07:44:02.648214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.532 [2024-11-20 07:44:02.648221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.532 [2024-11-20 07:44:02.648227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.532 [2024-11-20 07:44:02.648241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.532 qpair failed and we were unable to recover it. 00:29:44.532 [2024-11-20 07:44:02.658192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.532 [2024-11-20 07:44:02.658237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.532 [2024-11-20 07:44:02.658251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.532 [2024-11-20 07:44:02.658257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.532 [2024-11-20 07:44:02.658264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.532 [2024-11-20 07:44:02.658277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.532 qpair failed and we were unable to recover it. 00:29:44.532 [2024-11-20 07:44:02.668237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.532 [2024-11-20 07:44:02.668286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.532 [2024-11-20 07:44:02.668299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.532 [2024-11-20 07:44:02.668306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.532 [2024-11-20 07:44:02.668312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.532 [2024-11-20 07:44:02.668330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.532 qpair failed and we were unable to recover it. 
00:29:44.532 [2024-11-20 07:44:02.678123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.532 [2024-11-20 07:44:02.678173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.532 [2024-11-20 07:44:02.678186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.532 [2024-11-20 07:44:02.678193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.532 [2024-11-20 07:44:02.678199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.532 [2024-11-20 07:44:02.678213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.532 qpair failed and we were unable to recover it. 00:29:44.532 [2024-11-20 07:44:02.688115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.532 [2024-11-20 07:44:02.688164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.532 [2024-11-20 07:44:02.688176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.532 [2024-11-20 07:44:02.688183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.532 [2024-11-20 07:44:02.688189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.532 [2024-11-20 07:44:02.688203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.532 qpair failed and we were unable to recover it. 00:29:44.532 [2024-11-20 07:44:02.698302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.532 [2024-11-20 07:44:02.698395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.532 [2024-11-20 07:44:02.698407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.532 [2024-11-20 07:44:02.698414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.532 [2024-11-20 07:44:02.698420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.532 [2024-11-20 07:44:02.698434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.532 qpair failed and we were unable to recover it. 
00:29:44.532 [2024-11-20 07:44:02.708334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.532 [2024-11-20 07:44:02.708425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.532 [2024-11-20 07:44:02.708438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.532 [2024-11-20 07:44:02.708445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.532 [2024-11-20 07:44:02.708451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.532 [2024-11-20 07:44:02.708465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.532 qpair failed and we were unable to recover it. 00:29:44.532 [2024-11-20 07:44:02.718339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.532 [2024-11-20 07:44:02.718390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.532 [2024-11-20 07:44:02.718403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.532 [2024-11-20 07:44:02.718410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.532 [2024-11-20 07:44:02.718416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.532 [2024-11-20 07:44:02.718431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.532 qpair failed and we were unable to recover it. 00:29:44.532 [2024-11-20 07:44:02.728352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.532 [2024-11-20 07:44:02.728397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.532 [2024-11-20 07:44:02.728410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.532 [2024-11-20 07:44:02.728418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.532 [2024-11-20 07:44:02.728425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.532 [2024-11-20 07:44:02.728440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.532 qpair failed and we were unable to recover it. 
00:29:44.795 [2024-11-20 07:44:02.738372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.795 [2024-11-20 07:44:02.738420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.795 [2024-11-20 07:44:02.738433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.795 [2024-11-20 07:44:02.738440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.795 [2024-11-20 07:44:02.738446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.795 [2024-11-20 07:44:02.738460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.795 qpair failed and we were unable to recover it. 00:29:44.795 [2024-11-20 07:44:02.748417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.795 [2024-11-20 07:44:02.748467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.795 [2024-11-20 07:44:02.748481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.795 [2024-11-20 07:44:02.748488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.795 [2024-11-20 07:44:02.748494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.795 [2024-11-20 07:44:02.748508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.795 qpair failed and we were unable to recover it. 00:29:44.795 [2024-11-20 07:44:02.758452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.795 [2024-11-20 07:44:02.758515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.795 [2024-11-20 07:44:02.758531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.795 [2024-11-20 07:44:02.758538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.795 [2024-11-20 07:44:02.758544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.795 [2024-11-20 07:44:02.758558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.795 qpair failed and we were unable to recover it. 
00:29:44.795 [2024-11-20 07:44:02.768453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.795 [2024-11-20 07:44:02.768499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.795 [2024-11-20 07:44:02.768512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.795 [2024-11-20 07:44:02.768519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.795 [2024-11-20 07:44:02.768525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.795 [2024-11-20 07:44:02.768539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.795 qpair failed and we were unable to recover it. 00:29:44.795 [2024-11-20 07:44:02.778503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.795 [2024-11-20 07:44:02.778578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.795 [2024-11-20 07:44:02.778591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.795 [2024-11-20 07:44:02.778598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.795 [2024-11-20 07:44:02.778604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.795 [2024-11-20 07:44:02.778618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.795 qpair failed and we were unable to recover it. 00:29:44.795 [2024-11-20 07:44:02.788533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.795 [2024-11-20 07:44:02.788582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.795 [2024-11-20 07:44:02.788595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.795 [2024-11-20 07:44:02.788602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.795 [2024-11-20 07:44:02.788608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.795 [2024-11-20 07:44:02.788622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.795 qpair failed and we were unable to recover it. 
00:29:44.795 [2024-11-20 07:44:02.798587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.795 [2024-11-20 07:44:02.798632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.795 [2024-11-20 07:44:02.798645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.795 [2024-11-20 07:44:02.798652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.795 [2024-11-20 07:44:02.798662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.795 [2024-11-20 07:44:02.798676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.795 qpair failed and we were unable to recover it. 00:29:44.795 [2024-11-20 07:44:02.808581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.795 [2024-11-20 07:44:02.808628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.795 [2024-11-20 07:44:02.808640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.795 [2024-11-20 07:44:02.808648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.795 [2024-11-20 07:44:02.808654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.795 [2024-11-20 07:44:02.808667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.795 qpair failed and we were unable to recover it. 00:29:44.795 [2024-11-20 07:44:02.818589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.795 [2024-11-20 07:44:02.818636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.795 [2024-11-20 07:44:02.818648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.795 [2024-11-20 07:44:02.818655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.796 [2024-11-20 07:44:02.818661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.796 [2024-11-20 07:44:02.818675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.796 qpair failed and we were unable to recover it. 
00:29:44.796 [2024-11-20 07:44:02.828607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.796 [2024-11-20 07:44:02.828698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.796 [2024-11-20 07:44:02.828711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.796 [2024-11-20 07:44:02.828718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.796 [2024-11-20 07:44:02.828724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.796 [2024-11-20 07:44:02.828738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.796 qpair failed and we were unable to recover it. 00:29:44.796 [2024-11-20 07:44:02.838559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.796 [2024-11-20 07:44:02.838606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.796 [2024-11-20 07:44:02.838619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.796 [2024-11-20 07:44:02.838626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.796 [2024-11-20 07:44:02.838632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.796 [2024-11-20 07:44:02.838646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.796 qpair failed and we were unable to recover it. 00:29:44.796 [2024-11-20 07:44:02.848538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.796 [2024-11-20 07:44:02.848580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.796 [2024-11-20 07:44:02.848595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.796 [2024-11-20 07:44:02.848602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.796 [2024-11-20 07:44:02.848608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.796 [2024-11-20 07:44:02.848629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.796 qpair failed and we were unable to recover it. 
00:29:44.796 [2024-11-20 07:44:02.858688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.796 [2024-11-20 07:44:02.858732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.796 [2024-11-20 07:44:02.858750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.796 [2024-11-20 07:44:02.858758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.796 [2024-11-20 07:44:02.858764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.796 [2024-11-20 07:44:02.858779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.796 qpair failed and we were unable to recover it. 00:29:44.796 [2024-11-20 07:44:02.868639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.796 [2024-11-20 07:44:02.868688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.796 [2024-11-20 07:44:02.868701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.796 [2024-11-20 07:44:02.868708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.796 [2024-11-20 07:44:02.868714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.796 [2024-11-20 07:44:02.868729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.796 qpair failed and we were unable to recover it. 00:29:44.796 [2024-11-20 07:44:02.878800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.796 [2024-11-20 07:44:02.878847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.796 [2024-11-20 07:44:02.878860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.796 [2024-11-20 07:44:02.878867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.796 [2024-11-20 07:44:02.878873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.796 [2024-11-20 07:44:02.878888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.796 qpair failed and we were unable to recover it. 
00:29:44.796 [2024-11-20 07:44:02.888773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.796 [2024-11-20 07:44:02.888824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.796 [2024-11-20 07:44:02.888844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.796 [2024-11-20 07:44:02.888851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.796 [2024-11-20 07:44:02.888858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.796 [2024-11-20 07:44:02.888872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.796 qpair failed and we were unable to recover it. 00:29:44.796 [2024-11-20 07:44:02.898836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.796 [2024-11-20 07:44:02.898880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.796 [2024-11-20 07:44:02.898893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.796 [2024-11-20 07:44:02.898901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.796 [2024-11-20 07:44:02.898907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.796 [2024-11-20 07:44:02.898921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.796 qpair failed and we were unable to recover it. 00:29:44.796 [2024-11-20 07:44:02.908856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.796 [2024-11-20 07:44:02.908904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.796 [2024-11-20 07:44:02.908916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.796 [2024-11-20 07:44:02.908924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.796 [2024-11-20 07:44:02.908930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.796 [2024-11-20 07:44:02.908945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.796 qpair failed and we were unable to recover it. 
00:29:44.796 [2024-11-20 07:44:02.918807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.796 [2024-11-20 07:44:02.918852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.796 [2024-11-20 07:44:02.918865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.796 [2024-11-20 07:44:02.918872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.796 [2024-11-20 07:44:02.918878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.796 [2024-11-20 07:44:02.918892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.796 qpair failed and we were unable to recover it. 00:29:44.796 [2024-11-20 07:44:02.928895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.796 [2024-11-20 07:44:02.928937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.796 [2024-11-20 07:44:02.928950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.796 [2024-11-20 07:44:02.928957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.796 [2024-11-20 07:44:02.928967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.796 [2024-11-20 07:44:02.928981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.796 qpair failed and we were unable to recover it. 00:29:44.796 [2024-11-20 07:44:02.938932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.796 [2024-11-20 07:44:02.938977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.796 [2024-11-20 07:44:02.938989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.796 [2024-11-20 07:44:02.938996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.796 [2024-11-20 07:44:02.939002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.796 [2024-11-20 07:44:02.939016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.796 qpair failed and we were unable to recover it. 
00:29:44.796 [2024-11-20 07:44:02.948996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.796 [2024-11-20 07:44:02.949049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.796 [2024-11-20 07:44:02.949062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.797 [2024-11-20 07:44:02.949069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.797 [2024-11-20 07:44:02.949076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.797 [2024-11-20 07:44:02.949089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.797 qpair failed and we were unable to recover it. 00:29:44.797 [2024-11-20 07:44:02.958994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.797 [2024-11-20 07:44:02.959045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.797 [2024-11-20 07:44:02.959058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.797 [2024-11-20 07:44:02.959065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.797 [2024-11-20 07:44:02.959072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.797 [2024-11-20 07:44:02.959086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.797 qpair failed and we were unable to recover it. 00:29:44.797 [2024-11-20 07:44:02.968979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.797 [2024-11-20 07:44:02.969049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.797 [2024-11-20 07:44:02.969062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.797 [2024-11-20 07:44:02.969069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.797 [2024-11-20 07:44:02.969075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.797 [2024-11-20 07:44:02.969090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.797 qpair failed and we were unable to recover it. 
00:29:44.797 [2024-11-20 07:44:02.978906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.797 [2024-11-20 07:44:02.978951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.797 [2024-11-20 07:44:02.978964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.797 [2024-11-20 07:44:02.978971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.797 [2024-11-20 07:44:02.978978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.797 [2024-11-20 07:44:02.978992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.797 qpair failed and we were unable to recover it. 00:29:44.797 [2024-11-20 07:44:02.989056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.797 [2024-11-20 07:44:02.989103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.797 [2024-11-20 07:44:02.989117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.797 [2024-11-20 07:44:02.989124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.797 [2024-11-20 07:44:02.989130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:44.797 [2024-11-20 07:44:02.989144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:44.797 qpair failed and we were unable to recover it. 00:29:45.059 [2024-11-20 07:44:02.998998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.059 [2024-11-20 07:44:02.999045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.059 [2024-11-20 07:44:02.999058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.059 [2024-11-20 07:44:02.999065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.059 [2024-11-20 07:44:02.999071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.059 [2024-11-20 07:44:02.999086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.059 qpair failed and we were unable to recover it. 
00:29:45.059 [2024-11-20 07:44:03.009104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.059 [2024-11-20 07:44:03.009147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.059 [2024-11-20 07:44:03.009160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.059 [2024-11-20 07:44:03.009167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.059 [2024-11-20 07:44:03.009173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.059 [2024-11-20 07:44:03.009187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.059 qpair failed and we were unable to recover it. 00:29:45.059 [2024-11-20 07:44:03.019129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.059 [2024-11-20 07:44:03.019183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.059 [2024-11-20 07:44:03.019200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.059 [2024-11-20 07:44:03.019207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.059 [2024-11-20 07:44:03.019213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.059 [2024-11-20 07:44:03.019227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.059 qpair failed and we were unable to recover it. 00:29:45.059 [2024-11-20 07:44:03.029032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.059 [2024-11-20 07:44:03.029080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.059 [2024-11-20 07:44:03.029093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.059 [2024-11-20 07:44:03.029099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.059 [2024-11-20 07:44:03.029106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.059 [2024-11-20 07:44:03.029120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.059 qpair failed and we were unable to recover it. 
00:29:45.059 [2024-11-20 07:44:03.039089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.059 [2024-11-20 07:44:03.039141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.059 [2024-11-20 07:44:03.039154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.059 [2024-11-20 07:44:03.039161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.059 [2024-11-20 07:44:03.039168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.059 [2024-11-20 07:44:03.039182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.059 qpair failed and we were unable to recover it. 00:29:45.059 [2024-11-20 07:44:03.049075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.059 [2024-11-20 07:44:03.049122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.059 [2024-11-20 07:44:03.049135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.059 [2024-11-20 07:44:03.049141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.059 [2024-11-20 07:44:03.049148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.059 [2024-11-20 07:44:03.049162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.059 qpair failed and we were unable to recover it. 00:29:45.059 [2024-11-20 07:44:03.059107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.059 [2024-11-20 07:44:03.059152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.059 [2024-11-20 07:44:03.059165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.059 [2024-11-20 07:44:03.059175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.059 [2024-11-20 07:44:03.059181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.059 [2024-11-20 07:44:03.059195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.059 qpair failed and we were unable to recover it. 
00:29:45.059 [2024-11-20 07:44:03.069141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.059 [2024-11-20 07:44:03.069190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.059 [2024-11-20 07:44:03.069203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.059 [2024-11-20 07:44:03.069210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.059 [2024-11-20 07:44:03.069216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.060 [2024-11-20 07:44:03.069231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.060 qpair failed and we were unable to recover it. 00:29:45.060 [2024-11-20 07:44:03.079318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.060 [2024-11-20 07:44:03.079363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.060 [2024-11-20 07:44:03.079376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.060 [2024-11-20 07:44:03.079383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.060 [2024-11-20 07:44:03.079389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.060 [2024-11-20 07:44:03.079403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.060 qpair failed and we were unable to recover it. 00:29:45.060 [2024-11-20 07:44:03.089278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.060 [2024-11-20 07:44:03.089323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.060 [2024-11-20 07:44:03.089336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.060 [2024-11-20 07:44:03.089342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.060 [2024-11-20 07:44:03.089349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.060 [2024-11-20 07:44:03.089363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.060 qpair failed and we were unable to recover it. 
00:29:45.060 [2024-11-20 07:44:03.099337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.060 [2024-11-20 07:44:03.099421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.060 [2024-11-20 07:44:03.099434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.060 [2024-11-20 07:44:03.099440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.060 [2024-11-20 07:44:03.099447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.060 [2024-11-20 07:44:03.099464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.060 qpair failed and we were unable to recover it. 00:29:45.060 [2024-11-20 07:44:03.109344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.060 [2024-11-20 07:44:03.109395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.060 [2024-11-20 07:44:03.109408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.060 [2024-11-20 07:44:03.109415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.060 [2024-11-20 07:44:03.109421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.060 [2024-11-20 07:44:03.109436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.060 qpair failed and we were unable to recover it. 00:29:45.060 [2024-11-20 07:44:03.119403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.060 [2024-11-20 07:44:03.119453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.060 [2024-11-20 07:44:03.119466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.060 [2024-11-20 07:44:03.119473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.060 [2024-11-20 07:44:03.119479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.060 [2024-11-20 07:44:03.119493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.060 qpair failed and we were unable to recover it. 
00:29:45.060 [2024-11-20 07:44:03.129381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.060 [2024-11-20 07:44:03.129422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.060 [2024-11-20 07:44:03.129435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.060 [2024-11-20 07:44:03.129442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.060 [2024-11-20 07:44:03.129448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.060 [2024-11-20 07:44:03.129462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.060 qpair failed and we were unable to recover it. 00:29:45.060 [2024-11-20 07:44:03.139422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.060 [2024-11-20 07:44:03.139469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.060 [2024-11-20 07:44:03.139482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.060 [2024-11-20 07:44:03.139489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.060 [2024-11-20 07:44:03.139495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.060 [2024-11-20 07:44:03.139509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.060 qpair failed and we were unable to recover it. 00:29:45.060 [2024-11-20 07:44:03.149475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.060 [2024-11-20 07:44:03.149524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.060 [2024-11-20 07:44:03.149537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.060 [2024-11-20 07:44:03.149545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.060 [2024-11-20 07:44:03.149551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.060 [2024-11-20 07:44:03.149565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.060 qpair failed and we were unable to recover it. 
00:29:45.060 [2024-11-20 07:44:03.159537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.060 [2024-11-20 07:44:03.159590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.060 [2024-11-20 07:44:03.159602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.060 [2024-11-20 07:44:03.159609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.060 [2024-11-20 07:44:03.159616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.060 [2024-11-20 07:44:03.159630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.060 qpair failed and we were unable to recover it. 00:29:45.060 [2024-11-20 07:44:03.169521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.060 [2024-11-20 07:44:03.169568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.060 [2024-11-20 07:44:03.169581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.060 [2024-11-20 07:44:03.169587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.060 [2024-11-20 07:44:03.169594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.060 [2024-11-20 07:44:03.169608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.060 qpair failed and we were unable to recover it. 00:29:45.060 [2024-11-20 07:44:03.179573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.060 [2024-11-20 07:44:03.179621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.060 [2024-11-20 07:44:03.179634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.060 [2024-11-20 07:44:03.179641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.060 [2024-11-20 07:44:03.179647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.060 [2024-11-20 07:44:03.179661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.060 qpair failed and we were unable to recover it. 
00:29:45.060 [2024-11-20 07:44:03.189571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.060 [2024-11-20 07:44:03.189619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.060 [2024-11-20 07:44:03.189631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.060 [2024-11-20 07:44:03.189642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.060 [2024-11-20 07:44:03.189648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.060 [2024-11-20 07:44:03.189662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.060 qpair failed and we were unable to recover it. 00:29:45.060 [2024-11-20 07:44:03.199622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.060 [2024-11-20 07:44:03.199668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.060 [2024-11-20 07:44:03.199682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.060 [2024-11-20 07:44:03.199689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.060 [2024-11-20 07:44:03.199695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.061 [2024-11-20 07:44:03.199709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.061 qpair failed and we were unable to recover it. 00:29:45.061 [2024-11-20 07:44:03.209653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.061 [2024-11-20 07:44:03.209722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.061 [2024-11-20 07:44:03.209736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.061 [2024-11-20 07:44:03.209743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.061 [2024-11-20 07:44:03.209753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.061 [2024-11-20 07:44:03.209767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.061 qpair failed and we were unable to recover it. 
00:29:45.061 [2024-11-20 07:44:03.219646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.061 [2024-11-20 07:44:03.219692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.061 [2024-11-20 07:44:03.219704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.061 [2024-11-20 07:44:03.219712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.061 [2024-11-20 07:44:03.219718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.061 [2024-11-20 07:44:03.219732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.061 qpair failed and we were unable to recover it. 00:29:45.061 [2024-11-20 07:44:03.229678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.061 [2024-11-20 07:44:03.229723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.061 [2024-11-20 07:44:03.229736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.061 [2024-11-20 07:44:03.229743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.061 [2024-11-20 07:44:03.229753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.061 [2024-11-20 07:44:03.229771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.061 qpair failed and we were unable to recover it. 00:29:45.061 [2024-11-20 07:44:03.239736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.061 [2024-11-20 07:44:03.239788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.061 [2024-11-20 07:44:03.239801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.061 [2024-11-20 07:44:03.239808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.061 [2024-11-20 07:44:03.239814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.061 [2024-11-20 07:44:03.239829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.061 qpair failed and we were unable to recover it. 
00:29:45.061 [2024-11-20 07:44:03.249725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.061 [2024-11-20 07:44:03.249764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.061 [2024-11-20 07:44:03.249777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.061 [2024-11-20 07:44:03.249784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.061 [2024-11-20 07:44:03.249790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.061 [2024-11-20 07:44:03.249805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.061 qpair failed and we were unable to recover it. 00:29:45.061 [2024-11-20 07:44:03.259743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.061 [2024-11-20 07:44:03.259789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.061 [2024-11-20 07:44:03.259802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.061 [2024-11-20 07:44:03.259809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.061 [2024-11-20 07:44:03.259815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.061 [2024-11-20 07:44:03.259829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.061 qpair failed and we were unable to recover it. 00:29:45.323 [2024-11-20 07:44:03.269835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.323 [2024-11-20 07:44:03.269904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.323 [2024-11-20 07:44:03.269917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.323 [2024-11-20 07:44:03.269924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.323 [2024-11-20 07:44:03.269930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.323 [2024-11-20 07:44:03.269944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.323 qpair failed and we were unable to recover it. 
00:29:45.323 [2024-11-20 07:44:03.279845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.323 [2024-11-20 07:44:03.279891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.323 [2024-11-20 07:44:03.279904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.323 [2024-11-20 07:44:03.279911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.323 [2024-11-20 07:44:03.279917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.323 [2024-11-20 07:44:03.279931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.323 qpair failed and we were unable to recover it. 00:29:45.323 [2024-11-20 07:44:03.289837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.323 [2024-11-20 07:44:03.289912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.323 [2024-11-20 07:44:03.289925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.323 [2024-11-20 07:44:03.289932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.323 [2024-11-20 07:44:03.289938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.323 [2024-11-20 07:44:03.289952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.323 qpair failed and we were unable to recover it. 00:29:45.323 [2024-11-20 07:44:03.299867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.323 [2024-11-20 07:44:03.299910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.323 [2024-11-20 07:44:03.299923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.323 [2024-11-20 07:44:03.299930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.323 [2024-11-20 07:44:03.299936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.323 [2024-11-20 07:44:03.299950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.323 qpair failed and we were unable to recover it. 
00:29:45.323 [2024-11-20 07:44:03.309913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.323 [2024-11-20 07:44:03.309956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.323 [2024-11-20 07:44:03.309969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.323 [2024-11-20 07:44:03.309976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.323 [2024-11-20 07:44:03.309982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.323 [2024-11-20 07:44:03.309996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.323 qpair failed and we were unable to recover it. 00:29:45.323 [2024-11-20 07:44:03.319964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.324 [2024-11-20 07:44:03.320031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.324 [2024-11-20 07:44:03.320047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.324 [2024-11-20 07:44:03.320054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.324 [2024-11-20 07:44:03.320060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.324 [2024-11-20 07:44:03.320074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.324 qpair failed and we were unable to recover it. 00:29:45.324 [2024-11-20 07:44:03.329907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.324 [2024-11-20 07:44:03.330006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.324 [2024-11-20 07:44:03.330018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.324 [2024-11-20 07:44:03.330025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.324 [2024-11-20 07:44:03.330031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.324 [2024-11-20 07:44:03.330045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.324 qpair failed and we were unable to recover it. 
00:29:45.324 [2024-11-20 07:44:03.339995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.324 [2024-11-20 07:44:03.340043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.324 [2024-11-20 07:44:03.340056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.324 [2024-11-20 07:44:03.340063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.324 [2024-11-20 07:44:03.340069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.324 [2024-11-20 07:44:03.340083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.324 qpair failed and we were unable to recover it.
00:29:45.324 [2024-11-20 07:44:03.350011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.324 [2024-11-20 07:44:03.350054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.324 [2024-11-20 07:44:03.350067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.324 [2024-11-20 07:44:03.350074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.324 [2024-11-20 07:44:03.350080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.324 [2024-11-20 07:44:03.350094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.324 qpair failed and we were unable to recover it.
00:29:45.324 [2024-11-20 07:44:03.360030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.324 [2024-11-20 07:44:03.360081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.324 [2024-11-20 07:44:03.360093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.324 [2024-11-20 07:44:03.360101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.324 [2024-11-20 07:44:03.360110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.324 [2024-11-20 07:44:03.360124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.324 qpair failed and we were unable to recover it.
00:29:45.324 [2024-11-20 07:44:03.370062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.324 [2024-11-20 07:44:03.370117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.324 [2024-11-20 07:44:03.370131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.324 [2024-11-20 07:44:03.370138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.324 [2024-11-20 07:44:03.370144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.324 [2024-11-20 07:44:03.370159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.324 qpair failed and we were unable to recover it.
00:29:45.324 [2024-11-20 07:44:03.380146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.324 [2024-11-20 07:44:03.380192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.324 [2024-11-20 07:44:03.380205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.324 [2024-11-20 07:44:03.380212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.324 [2024-11-20 07:44:03.380218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.324 [2024-11-20 07:44:03.380232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.324 qpair failed and we were unable to recover it.
00:29:45.324 [2024-11-20 07:44:03.389988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.324 [2024-11-20 07:44:03.390043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.324 [2024-11-20 07:44:03.390055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.324 [2024-11-20 07:44:03.390062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.324 [2024-11-20 07:44:03.390068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.324 [2024-11-20 07:44:03.390083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.324 qpair failed and we were unable to recover it.
00:29:45.324 [2024-11-20 07:44:03.400153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.324 [2024-11-20 07:44:03.400201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.324 [2024-11-20 07:44:03.400214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.324 [2024-11-20 07:44:03.400221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.324 [2024-11-20 07:44:03.400227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.324 [2024-11-20 07:44:03.400241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.324 qpair failed and we were unable to recover it.
00:29:45.324 [2024-11-20 07:44:03.410178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.324 [2024-11-20 07:44:03.410225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.324 [2024-11-20 07:44:03.410238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.324 [2024-11-20 07:44:03.410245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.324 [2024-11-20 07:44:03.410251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.324 [2024-11-20 07:44:03.410265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.324 qpair failed and we were unable to recover it.
00:29:45.324 [2024-11-20 07:44:03.420088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.324 [2024-11-20 07:44:03.420146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.324 [2024-11-20 07:44:03.420159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.324 [2024-11-20 07:44:03.420166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.324 [2024-11-20 07:44:03.420172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.324 [2024-11-20 07:44:03.420186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.324 qpair failed and we were unable to recover it.
00:29:45.324 [2024-11-20 07:44:03.430194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.324 [2024-11-20 07:44:03.430238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.324 [2024-11-20 07:44:03.430251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.324 [2024-11-20 07:44:03.430258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.324 [2024-11-20 07:44:03.430265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.324 [2024-11-20 07:44:03.430279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.324 qpair failed and we were unable to recover it.
00:29:45.324 [2024-11-20 07:44:03.440280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.324 [2024-11-20 07:44:03.440329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.324 [2024-11-20 07:44:03.440342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.324 [2024-11-20 07:44:03.440349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.324 [2024-11-20 07:44:03.440355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.324 [2024-11-20 07:44:03.440369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.324 qpair failed and we were unable to recover it.
00:29:45.325 [2024-11-20 07:44:03.450250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.325 [2024-11-20 07:44:03.450291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.325 [2024-11-20 07:44:03.450307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.325 [2024-11-20 07:44:03.450314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.325 [2024-11-20 07:44:03.450320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.325 [2024-11-20 07:44:03.450334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.325 qpair failed and we were unable to recover it.
00:29:45.325 [2024-11-20 07:44:03.460173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.325 [2024-11-20 07:44:03.460219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.325 [2024-11-20 07:44:03.460232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.325 [2024-11-20 07:44:03.460239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.325 [2024-11-20 07:44:03.460245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.325 [2024-11-20 07:44:03.460259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.325 qpair failed and we were unable to recover it.
00:29:45.325 [2024-11-20 07:44:03.470356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.325 [2024-11-20 07:44:03.470410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.325 [2024-11-20 07:44:03.470423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.325 [2024-11-20 07:44:03.470430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.325 [2024-11-20 07:44:03.470436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.325 [2024-11-20 07:44:03.470450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.325 qpair failed and we were unable to recover it.
00:29:45.325 [2024-11-20 07:44:03.480261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.325 [2024-11-20 07:44:03.480308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.325 [2024-11-20 07:44:03.480321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.325 [2024-11-20 07:44:03.480328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.325 [2024-11-20 07:44:03.480334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.325 [2024-11-20 07:44:03.480348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.325 qpair failed and we were unable to recover it.
00:29:45.325 [2024-11-20 07:44:03.490405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.325 [2024-11-20 07:44:03.490486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.325 [2024-11-20 07:44:03.490499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.325 [2024-11-20 07:44:03.490506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.325 [2024-11-20 07:44:03.490516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.325 [2024-11-20 07:44:03.490530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.325 qpair failed and we were unable to recover it.
00:29:45.325 [2024-11-20 07:44:03.500419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.325 [2024-11-20 07:44:03.500466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.325 [2024-11-20 07:44:03.500479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.325 [2024-11-20 07:44:03.500486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.325 [2024-11-20 07:44:03.500492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.325 [2024-11-20 07:44:03.500506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.325 qpair failed and we were unable to recover it.
00:29:45.325 [2024-11-20 07:44:03.510442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.325 [2024-11-20 07:44:03.510490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.325 [2024-11-20 07:44:03.510503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.325 [2024-11-20 07:44:03.510509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.325 [2024-11-20 07:44:03.510515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.325 [2024-11-20 07:44:03.510530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.325 qpair failed and we were unable to recover it.
00:29:45.325 [2024-11-20 07:44:03.520504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.325 [2024-11-20 07:44:03.520562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.325 [2024-11-20 07:44:03.520575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.325 [2024-11-20 07:44:03.520582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.325 [2024-11-20 07:44:03.520588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.325 [2024-11-20 07:44:03.520602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.325 qpair failed and we were unable to recover it.
00:29:45.588 [2024-11-20 07:44:03.530366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.588 [2024-11-20 07:44:03.530412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.588 [2024-11-20 07:44:03.530425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.588 [2024-11-20 07:44:03.530432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.588 [2024-11-20 07:44:03.530438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.588 [2024-11-20 07:44:03.530452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.588 qpair failed and we were unable to recover it.
00:29:45.588 [2024-11-20 07:44:03.540523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.588 [2024-11-20 07:44:03.540617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.588 [2024-11-20 07:44:03.540631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.588 [2024-11-20 07:44:03.540638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.588 [2024-11-20 07:44:03.540644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.588 [2024-11-20 07:44:03.540662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.588 qpair failed and we were unable to recover it.
00:29:45.588 [2024-11-20 07:44:03.550635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.588 [2024-11-20 07:44:03.550691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.588 [2024-11-20 07:44:03.550705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.588 [2024-11-20 07:44:03.550711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.588 [2024-11-20 07:44:03.550718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.588 [2024-11-20 07:44:03.550732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.588 qpair failed and we were unable to recover it.
00:29:45.588 [2024-11-20 07:44:03.560648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.588 [2024-11-20 07:44:03.560723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.588 [2024-11-20 07:44:03.560736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.588 [2024-11-20 07:44:03.560743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.588 [2024-11-20 07:44:03.560762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.588 [2024-11-20 07:44:03.560777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.588 qpair failed and we were unable to recover it.
00:29:45.588 [2024-11-20 07:44:03.570592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.588 [2024-11-20 07:44:03.570634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.588 [2024-11-20 07:44:03.570647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.588 [2024-11-20 07:44:03.570654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.588 [2024-11-20 07:44:03.570660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.588 [2024-11-20 07:44:03.570675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.588 qpair failed and we were unable to recover it.
00:29:45.588 [2024-11-20 07:44:03.580643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.588 [2024-11-20 07:44:03.580691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.588 [2024-11-20 07:44:03.580704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.588 [2024-11-20 07:44:03.580711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.588 [2024-11-20 07:44:03.580717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.588 [2024-11-20 07:44:03.580731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.588 qpair failed and we were unable to recover it.
00:29:45.588 [2024-11-20 07:44:03.590666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.588 [2024-11-20 07:44:03.590714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.588 [2024-11-20 07:44:03.590727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.588 [2024-11-20 07:44:03.590734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.588 [2024-11-20 07:44:03.590740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.588 [2024-11-20 07:44:03.590758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.588 qpair failed and we were unable to recover it.
00:29:45.588 [2024-11-20 07:44:03.600721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.588 [2024-11-20 07:44:03.600769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.588 [2024-11-20 07:44:03.600782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.588 [2024-11-20 07:44:03.600789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.588 [2024-11-20 07:44:03.600795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.588 [2024-11-20 07:44:03.600809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.588 qpair failed and we were unable to recover it.
00:29:45.588 [2024-11-20 07:44:03.610706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.588 [2024-11-20 07:44:03.610749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.588 [2024-11-20 07:44:03.610762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.588 [2024-11-20 07:44:03.610769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.588 [2024-11-20 07:44:03.610775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.588 [2024-11-20 07:44:03.610789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.588 qpair failed and we were unable to recover it.
00:29:45.588 [2024-11-20 07:44:03.620713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.588 [2024-11-20 07:44:03.620762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.588 [2024-11-20 07:44:03.620775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.588 [2024-11-20 07:44:03.620789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.588 [2024-11-20 07:44:03.620795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.588 [2024-11-20 07:44:03.620809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.588 qpair failed and we were unable to recover it.
00:29:45.589 [2024-11-20 07:44:03.630778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.589 [2024-11-20 07:44:03.630823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.589 [2024-11-20 07:44:03.630836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.589 [2024-11-20 07:44:03.630843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.589 [2024-11-20 07:44:03.630849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.589 [2024-11-20 07:44:03.630863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.589 qpair failed and we were unable to recover it.
00:29:45.589 [2024-11-20 07:44:03.640827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.589 [2024-11-20 07:44:03.640871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.589 [2024-11-20 07:44:03.640883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.589 [2024-11-20 07:44:03.640890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.589 [2024-11-20 07:44:03.640896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.589 [2024-11-20 07:44:03.640911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.589 qpair failed and we were unable to recover it.
00:29:45.589 [2024-11-20 07:44:03.650821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.589 [2024-11-20 07:44:03.650865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.589 [2024-11-20 07:44:03.650877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.589 [2024-11-20 07:44:03.650884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.589 [2024-11-20 07:44:03.650890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.589 [2024-11-20 07:44:03.650904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.589 qpair failed and we were unable to recover it.
00:29:45.589 [2024-11-20 07:44:03.660823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.589 [2024-11-20 07:44:03.660871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.589 [2024-11-20 07:44:03.660884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.589 [2024-11-20 07:44:03.660891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.589 [2024-11-20 07:44:03.660897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.589 [2024-11-20 07:44:03.660915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.589 qpair failed and we were unable to recover it.
00:29:45.589 [2024-11-20 07:44:03.670911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.589 [2024-11-20 07:44:03.670966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.589 [2024-11-20 07:44:03.670979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.589 [2024-11-20 07:44:03.670986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.589 [2024-11-20 07:44:03.670992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.589 [2024-11-20 07:44:03.671006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.589 qpair failed and we were unable to recover it.
00:29:45.589 [2024-11-20 07:44:03.680825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.589 [2024-11-20 07:44:03.680930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.589 [2024-11-20 07:44:03.680945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.589 [2024-11-20 07:44:03.680952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.589 [2024-11-20 07:44:03.680959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.589 [2024-11-20 07:44:03.680977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.589 qpair failed and we were unable to recover it.
00:29:45.589 [2024-11-20 07:44:03.690933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.589 [2024-11-20 07:44:03.691017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.589 [2024-11-20 07:44:03.691030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.589 [2024-11-20 07:44:03.691037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.589 [2024-11-20 07:44:03.691044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.589 [2024-11-20 07:44:03.691058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.589 qpair failed and we were unable to recover it.
00:29:45.589 [2024-11-20 07:44:03.700984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.589 [2024-11-20 07:44:03.701080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.589 [2024-11-20 07:44:03.701093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.589 [2024-11-20 07:44:03.701100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.589 [2024-11-20 07:44:03.701106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.589 [2024-11-20 07:44:03.701120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.589 qpair failed and we were unable to recover it.
00:29:45.589 [2024-11-20 07:44:03.710867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.589 [2024-11-20 07:44:03.710916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.589 [2024-11-20 07:44:03.710929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.589 [2024-11-20 07:44:03.710936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.589 [2024-11-20 07:44:03.710942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.589 [2024-11-20 07:44:03.710956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.589 qpair failed and we were unable to recover it.
00:29:45.589 [2024-11-20 07:44:03.720929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.589 [2024-11-20 07:44:03.720976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.589 [2024-11-20 07:44:03.720988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.589 [2024-11-20 07:44:03.720995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.589 [2024-11-20 07:44:03.721001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.589 [2024-11-20 07:44:03.721015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.589 qpair failed and we were unable to recover it.
00:29:45.589 [2024-11-20 07:44:03.731044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.589 [2024-11-20 07:44:03.731093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.589 [2024-11-20 07:44:03.731107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.589 [2024-11-20 07:44:03.731114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.589 [2024-11-20 07:44:03.731121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.589 [2024-11-20 07:44:03.731135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.589 qpair failed and we were unable to recover it.
00:29:45.589 [2024-11-20 07:44:03.740983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.589 [2024-11-20 07:44:03.741036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.589 [2024-11-20 07:44:03.741049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.589 [2024-11-20 07:44:03.741056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.589 [2024-11-20 07:44:03.741062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.589 [2024-11-20 07:44:03.741076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.589 qpair failed and we were unable to recover it.
00:29:45.589 [2024-11-20 07:44:03.751103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.589 [2024-11-20 07:44:03.751149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.589 [2024-11-20 07:44:03.751162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.589 [2024-11-20 07:44:03.751172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.589 [2024-11-20 07:44:03.751178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.589 [2024-11-20 07:44:03.751192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.589 qpair failed and we were unable to recover it.
00:29:45.590 [2024-11-20 07:44:03.761131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.590 [2024-11-20 07:44:03.761176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.590 [2024-11-20 07:44:03.761189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.590 [2024-11-20 07:44:03.761196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.590 [2024-11-20 07:44:03.761202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.590 [2024-11-20 07:44:03.761216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.590 qpair failed and we were unable to recover it.
00:29:45.590 [2024-11-20 07:44:03.771154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.590 [2024-11-20 07:44:03.771198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.590 [2024-11-20 07:44:03.771211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.590 [2024-11-20 07:44:03.771218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.590 [2024-11-20 07:44:03.771224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.590 [2024-11-20 07:44:03.771238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.590 qpair failed and we were unable to recover it.
00:29:45.590 [2024-11-20 07:44:03.781191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.590 [2024-11-20 07:44:03.781236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.590 [2024-11-20 07:44:03.781248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.590 [2024-11-20 07:44:03.781255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.590 [2024-11-20 07:44:03.781261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.590 [2024-11-20 07:44:03.781275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.590 qpair failed and we were unable to recover it.
00:29:45.590 [2024-11-20 07:44:03.791217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.590 [2024-11-20 07:44:03.791265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.590 [2024-11-20 07:44:03.791277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.590 [2024-11-20 07:44:03.791284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.590 [2024-11-20 07:44:03.791290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.590 [2024-11-20 07:44:03.791308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.590 qpair failed and we were unable to recover it.
00:29:45.852 [2024-11-20 07:44:03.801227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.852 [2024-11-20 07:44:03.801271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.852 [2024-11-20 07:44:03.801283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.852 [2024-11-20 07:44:03.801290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.852 [2024-11-20 07:44:03.801296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.852 [2024-11-20 07:44:03.801310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.852 qpair failed and we were unable to recover it.
00:29:45.852 [2024-11-20 07:44:03.811283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.852 [2024-11-20 07:44:03.811329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.852 [2024-11-20 07:44:03.811341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.852 [2024-11-20 07:44:03.811349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.852 [2024-11-20 07:44:03.811355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.852 [2024-11-20 07:44:03.811368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.852 qpair failed and we were unable to recover it.
00:29:45.852 [2024-11-20 07:44:03.821274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.852 [2024-11-20 07:44:03.821319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.852 [2024-11-20 07:44:03.821332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.852 [2024-11-20 07:44:03.821339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.852 [2024-11-20 07:44:03.821345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.852 [2024-11-20 07:44:03.821361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.852 qpair failed and we were unable to recover it.
00:29:45.852 [2024-11-20 07:44:03.831291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.852 [2024-11-20 07:44:03.831335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.852 [2024-11-20 07:44:03.831348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.852 [2024-11-20 07:44:03.831355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.852 [2024-11-20 07:44:03.831361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.852 [2024-11-20 07:44:03.831375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.852 qpair failed and we were unable to recover it.
00:29:45.852 [2024-11-20 07:44:03.841381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.852 [2024-11-20 07:44:03.841427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.852 [2024-11-20 07:44:03.841442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.852 [2024-11-20 07:44:03.841449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.852 [2024-11-20 07:44:03.841455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.852 [2024-11-20 07:44:03.841469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.852 qpair failed and we were unable to recover it.
00:29:45.852 [2024-11-20 07:44:03.851355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.852 [2024-11-20 07:44:03.851397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.852 [2024-11-20 07:44:03.851410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.852 [2024-11-20 07:44:03.851417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.852 [2024-11-20 07:44:03.851424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.852 [2024-11-20 07:44:03.851437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.852 qpair failed and we were unable to recover it.
00:29:45.852 [2024-11-20 07:44:03.861383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.852 [2024-11-20 07:44:03.861474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.852 [2024-11-20 07:44:03.861487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.852 [2024-11-20 07:44:03.861494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.852 [2024-11-20 07:44:03.861500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.852 [2024-11-20 07:44:03.861514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.852 qpair failed and we were unable to recover it.
00:29:45.852 [2024-11-20 07:44:03.871413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.852 [2024-11-20 07:44:03.871463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.852 [2024-11-20 07:44:03.871487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.852 [2024-11-20 07:44:03.871495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.852 [2024-11-20 07:44:03.871502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.852 [2024-11-20 07:44:03.871522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.852 qpair failed and we were unable to recover it.
00:29:45.852 [2024-11-20 07:44:03.881334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.852 [2024-11-20 07:44:03.881412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.852 [2024-11-20 07:44:03.881431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.852 [2024-11-20 07:44:03.881439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.852 [2024-11-20 07:44:03.881445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.852 [2024-11-20 07:44:03.881461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.852 qpair failed and we were unable to recover it.
00:29:45.852 [2024-11-20 07:44:03.891440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.852 [2024-11-20 07:44:03.891495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.852 [2024-11-20 07:44:03.891509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.852 [2024-11-20 07:44:03.891516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.852 [2024-11-20 07:44:03.891522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.852 [2024-11-20 07:44:03.891537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.852 qpair failed and we were unable to recover it.
00:29:45.852 [2024-11-20 07:44:03.901483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.852 [2024-11-20 07:44:03.901530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.852 [2024-11-20 07:44:03.901555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.852 [2024-11-20 07:44:03.901564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.852 [2024-11-20 07:44:03.901571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.853 [2024-11-20 07:44:03.901590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.853 qpair failed and we were unable to recover it.
00:29:45.853 [2024-11-20 07:44:03.911521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.853 [2024-11-20 07:44:03.911571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.853 [2024-11-20 07:44:03.911596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.853 [2024-11-20 07:44:03.911604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.853 [2024-11-20 07:44:03.911612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.853 [2024-11-20 07:44:03.911631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.853 qpair failed and we were unable to recover it.
00:29:45.853 [2024-11-20 07:44:03.921448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.853 [2024-11-20 07:44:03.921495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.853 [2024-11-20 07:44:03.921510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.853 [2024-11-20 07:44:03.921517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.853 [2024-11-20 07:44:03.921528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.853 [2024-11-20 07:44:03.921544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.853 qpair failed and we were unable to recover it.
00:29:45.853 [2024-11-20 07:44:03.931571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.853 [2024-11-20 07:44:03.931618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.853 [2024-11-20 07:44:03.931632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.853 [2024-11-20 07:44:03.931639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.853 [2024-11-20 07:44:03.931645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.853 [2024-11-20 07:44:03.931660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.853 qpair failed and we were unable to recover it.
00:29:45.853 [2024-11-20 07:44:03.941577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.853 [2024-11-20 07:44:03.941621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.853 [2024-11-20 07:44:03.941634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.853 [2024-11-20 07:44:03.941641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.853 [2024-11-20 07:44:03.941648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.853 [2024-11-20 07:44:03.941662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.853 qpair failed and we were unable to recover it.
00:29:45.853 [2024-11-20 07:44:03.951630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.853 [2024-11-20 07:44:03.951682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.853 [2024-11-20 07:44:03.951696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.853 [2024-11-20 07:44:03.951703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.853 [2024-11-20 07:44:03.951709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.853 [2024-11-20 07:44:03.951723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.853 qpair failed and we were unable to recover it.
00:29:45.853 [2024-11-20 07:44:03.961703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.853 [2024-11-20 07:44:03.961753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.853 [2024-11-20 07:44:03.961767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.853 [2024-11-20 07:44:03.961774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.853 [2024-11-20 07:44:03.961780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.853 [2024-11-20 07:44:03.961795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.853 qpair failed and we were unable to recover it.
00:29:45.853 [2024-11-20 07:44:03.971695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.853 [2024-11-20 07:44:03.971736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.853 [2024-11-20 07:44:03.971754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.853 [2024-11-20 07:44:03.971762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.853 [2024-11-20 07:44:03.971768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.853 [2024-11-20 07:44:03.971782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.853 qpair failed and we were unable to recover it.
00:29:45.853 [2024-11-20 07:44:03.981719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.853 [2024-11-20 07:44:03.981765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.853 [2024-11-20 07:44:03.981778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.853 [2024-11-20 07:44:03.981785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.853 [2024-11-20 07:44:03.981792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.853 [2024-11-20 07:44:03.981806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.853 qpair failed and we were unable to recover it.
00:29:45.853 [2024-11-20 07:44:03.991764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.853 [2024-11-20 07:44:03.991809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.853 [2024-11-20 07:44:03.991823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.853 [2024-11-20 07:44:03.991830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.853 [2024-11-20 07:44:03.991837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.853 [2024-11-20 07:44:03.991851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.853 qpair failed and we were unable to recover it.
00:29:45.853 [2024-11-20 07:44:04.001823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.853 [2024-11-20 07:44:04.001906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.853 [2024-11-20 07:44:04.001919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.853 [2024-11-20 07:44:04.001926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.853 [2024-11-20 07:44:04.001933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.853 [2024-11-20 07:44:04.001947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.853 qpair failed and we were unable to recover it.
00:29:45.853 [2024-11-20 07:44:04.011804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.853 [2024-11-20 07:44:04.011847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.853 [2024-11-20 07:44:04.011865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.853 [2024-11-20 07:44:04.011873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.853 [2024-11-20 07:44:04.011882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.853 [2024-11-20 07:44:04.011897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.853 qpair failed and we were unable to recover it.
00:29:45.853 [2024-11-20 07:44:04.021836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.853 [2024-11-20 07:44:04.021884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.853 [2024-11-20 07:44:04.021898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.853 [2024-11-20 07:44:04.021905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.853 [2024-11-20 07:44:04.021911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:45.853 [2024-11-20 07:44:04.021926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:45.853 qpair failed and we were unable to recover it.
00:29:45.853 [2024-11-20 07:44:04.031869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.853 [2024-11-20 07:44:04.031917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.853 [2024-11-20 07:44:04.031930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.853 [2024-11-20 07:44:04.031937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.854 [2024-11-20 07:44:04.031943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.854 [2024-11-20 07:44:04.031958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.854 qpair failed and we were unable to recover it. 00:29:45.854 [2024-11-20 07:44:04.041951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.854 [2024-11-20 07:44:04.042029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.854 [2024-11-20 07:44:04.042042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.854 [2024-11-20 07:44:04.042050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.854 [2024-11-20 07:44:04.042056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.854 [2024-11-20 07:44:04.042070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.854 qpair failed and we were unable to recover it. 00:29:45.854 [2024-11-20 07:44:04.051885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.854 [2024-11-20 07:44:04.051931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.854 [2024-11-20 07:44:04.051943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.854 [2024-11-20 07:44:04.051950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.854 [2024-11-20 07:44:04.051960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:45.854 [2024-11-20 07:44:04.051975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.854 qpair failed and we were unable to recover it. 
00:29:46.116 [2024-11-20 07:44:04.061916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.116 [2024-11-20 07:44:04.061966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.116 [2024-11-20 07:44:04.061979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.116 [2024-11-20 07:44:04.061986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.116 [2024-11-20 07:44:04.061992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.116 [2024-11-20 07:44:04.062007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.116 qpair failed and we were unable to recover it. 00:29:46.116 [2024-11-20 07:44:04.071954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.116 [2024-11-20 07:44:04.072006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.116 [2024-11-20 07:44:04.072020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.116 [2024-11-20 07:44:04.072027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.116 [2024-11-20 07:44:04.072034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.116 [2024-11-20 07:44:04.072049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.116 qpair failed and we were unable to recover it. 00:29:46.116 [2024-11-20 07:44:04.082067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.116 [2024-11-20 07:44:04.082115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.116 [2024-11-20 07:44:04.082128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.116 [2024-11-20 07:44:04.082135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.116 [2024-11-20 07:44:04.082141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.116 [2024-11-20 07:44:04.082156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.116 qpair failed and we were unable to recover it. 
00:29:46.116 [2024-11-20 07:44:04.092001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.116 [2024-11-20 07:44:04.092067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.116 [2024-11-20 07:44:04.092081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.116 [2024-11-20 07:44:04.092089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.116 [2024-11-20 07:44:04.092095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.116 [2024-11-20 07:44:04.092113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.116 qpair failed and we were unable to recover it. 00:29:46.116 [2024-11-20 07:44:04.101909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.116 [2024-11-20 07:44:04.101956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.116 [2024-11-20 07:44:04.101970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.116 [2024-11-20 07:44:04.101977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.116 [2024-11-20 07:44:04.101983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.116 [2024-11-20 07:44:04.102004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.116 qpair failed and we were unable to recover it. 00:29:46.116 [2024-11-20 07:44:04.112111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.116 [2024-11-20 07:44:04.112157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.116 [2024-11-20 07:44:04.112170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.116 [2024-11-20 07:44:04.112177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.116 [2024-11-20 07:44:04.112183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.116 [2024-11-20 07:44:04.112198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.116 qpair failed and we were unable to recover it. 
00:29:46.116 [2024-11-20 07:44:04.122130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.116 [2024-11-20 07:44:04.122175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.116 [2024-11-20 07:44:04.122188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.116 [2024-11-20 07:44:04.122195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.116 [2024-11-20 07:44:04.122201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.116 [2024-11-20 07:44:04.122216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.116 qpair failed and we were unable to recover it. 00:29:46.116 [2024-11-20 07:44:04.132115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.116 [2024-11-20 07:44:04.132154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.116 [2024-11-20 07:44:04.132167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.116 [2024-11-20 07:44:04.132174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.116 [2024-11-20 07:44:04.132180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.117 [2024-11-20 07:44:04.132194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.117 qpair failed and we were unable to recover it. 00:29:46.117 [2024-11-20 07:44:04.142135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.117 [2024-11-20 07:44:04.142182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.117 [2024-11-20 07:44:04.142195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.117 [2024-11-20 07:44:04.142202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.117 [2024-11-20 07:44:04.142208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.117 [2024-11-20 07:44:04.142222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.117 qpair failed and we were unable to recover it. 
00:29:46.117 [2024-11-20 07:44:04.152187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.117 [2024-11-20 07:44:04.152232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.117 [2024-11-20 07:44:04.152245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.117 [2024-11-20 07:44:04.152252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.117 [2024-11-20 07:44:04.152258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.117 [2024-11-20 07:44:04.152272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.117 qpair failed and we were unable to recover it. 00:29:46.117 [2024-11-20 07:44:04.162254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.117 [2024-11-20 07:44:04.162297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.117 [2024-11-20 07:44:04.162311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.117 [2024-11-20 07:44:04.162318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.117 [2024-11-20 07:44:04.162324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.117 [2024-11-20 07:44:04.162338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.117 qpair failed and we were unable to recover it. 00:29:46.117 [2024-11-20 07:44:04.172197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.117 [2024-11-20 07:44:04.172238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.117 [2024-11-20 07:44:04.172251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.117 [2024-11-20 07:44:04.172258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.117 [2024-11-20 07:44:04.172264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.117 [2024-11-20 07:44:04.172278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.117 qpair failed and we were unable to recover it. 
00:29:46.117 [2024-11-20 07:44:04.182257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.117 [2024-11-20 07:44:04.182303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.117 [2024-11-20 07:44:04.182316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.117 [2024-11-20 07:44:04.182326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.117 [2024-11-20 07:44:04.182333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.117 [2024-11-20 07:44:04.182347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.117 qpair failed and we were unable to recover it. 00:29:46.117 [2024-11-20 07:44:04.192330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.117 [2024-11-20 07:44:04.192430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.117 [2024-11-20 07:44:04.192443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.117 [2024-11-20 07:44:04.192450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.117 [2024-11-20 07:44:04.192457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.117 [2024-11-20 07:44:04.192471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.117 qpair failed and we were unable to recover it. 00:29:46.117 [2024-11-20 07:44:04.202364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.117 [2024-11-20 07:44:04.202410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.117 [2024-11-20 07:44:04.202423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.117 [2024-11-20 07:44:04.202430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.117 [2024-11-20 07:44:04.202437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.117 [2024-11-20 07:44:04.202451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.117 qpair failed and we were unable to recover it. 
00:29:46.117 [2024-11-20 07:44:04.212309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.117 [2024-11-20 07:44:04.212353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.117 [2024-11-20 07:44:04.212367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.117 [2024-11-20 07:44:04.212374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.117 [2024-11-20 07:44:04.212380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.117 [2024-11-20 07:44:04.212395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.117 qpair failed and we were unable to recover it. 00:29:46.117 [2024-11-20 07:44:04.222374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.117 [2024-11-20 07:44:04.222427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.117 [2024-11-20 07:44:04.222441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.117 [2024-11-20 07:44:04.222448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.117 [2024-11-20 07:44:04.222454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.117 [2024-11-20 07:44:04.222472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.117 qpair failed and we were unable to recover it. 00:29:46.117 [2024-11-20 07:44:04.232411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.117 [2024-11-20 07:44:04.232464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.117 [2024-11-20 07:44:04.232477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.117 [2024-11-20 07:44:04.232485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.117 [2024-11-20 07:44:04.232492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.117 [2024-11-20 07:44:04.232507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.117 qpair failed and we were unable to recover it. 
00:29:46.117 [2024-11-20 07:44:04.242452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.117 [2024-11-20 07:44:04.242497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.117 [2024-11-20 07:44:04.242510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.117 [2024-11-20 07:44:04.242517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.117 [2024-11-20 07:44:04.242523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.117 [2024-11-20 07:44:04.242537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.117 qpair failed and we were unable to recover it. 00:29:46.117 [2024-11-20 07:44:04.252430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.117 [2024-11-20 07:44:04.252469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.117 [2024-11-20 07:44:04.252482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.117 [2024-11-20 07:44:04.252489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.117 [2024-11-20 07:44:04.252496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.117 [2024-11-20 07:44:04.252510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.117 qpair failed and we were unable to recover it. 00:29:46.117 [2024-11-20 07:44:04.262480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.117 [2024-11-20 07:44:04.262532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.117 [2024-11-20 07:44:04.262545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.117 [2024-11-20 07:44:04.262552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.118 [2024-11-20 07:44:04.262559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.118 [2024-11-20 07:44:04.262574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.118 qpair failed and we were unable to recover it. 
00:29:46.118 [2024-11-20 07:44:04.272395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.118 [2024-11-20 07:44:04.272449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.118 [2024-11-20 07:44:04.272463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.118 [2024-11-20 07:44:04.272470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.118 [2024-11-20 07:44:04.272476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.118 [2024-11-20 07:44:04.272497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.118 qpair failed and we were unable to recover it. 00:29:46.118 [2024-11-20 07:44:04.282432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.118 [2024-11-20 07:44:04.282479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.118 [2024-11-20 07:44:04.282492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.118 [2024-11-20 07:44:04.282499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.118 [2024-11-20 07:44:04.282506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.118 [2024-11-20 07:44:04.282520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.118 qpair failed and we were unable to recover it. 00:29:46.118 [2024-11-20 07:44:04.292533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.118 [2024-11-20 07:44:04.292580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.118 [2024-11-20 07:44:04.292593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.118 [2024-11-20 07:44:04.292600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.118 [2024-11-20 07:44:04.292607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.118 [2024-11-20 07:44:04.292621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.118 qpair failed and we were unable to recover it. 
00:29:46.118 [2024-11-20 07:44:04.302597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.118 [2024-11-20 07:44:04.302642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.118 [2024-11-20 07:44:04.302655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.118 [2024-11-20 07:44:04.302662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.118 [2024-11-20 07:44:04.302668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.118 [2024-11-20 07:44:04.302683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.118 qpair failed and we were unable to recover it. 00:29:46.118 [2024-11-20 07:44:04.312636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.118 [2024-11-20 07:44:04.312684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.118 [2024-11-20 07:44:04.312696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.118 [2024-11-20 07:44:04.312707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.118 [2024-11-20 07:44:04.312713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.118 [2024-11-20 07:44:04.312727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.118 qpair failed and we were unable to recover it. 00:29:46.380 [2024-11-20 07:44:04.322681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.380 [2024-11-20 07:44:04.322774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.380 [2024-11-20 07:44:04.322787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.380 [2024-11-20 07:44:04.322795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.380 [2024-11-20 07:44:04.322801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.380 [2024-11-20 07:44:04.322816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.380 qpair failed and we were unable to recover it. 
00:29:46.380 [2024-11-20 07:44:04.332666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.380 [2024-11-20 07:44:04.332712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.380 [2024-11-20 07:44:04.332725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.380 [2024-11-20 07:44:04.332732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.380 [2024-11-20 07:44:04.332739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.380 [2024-11-20 07:44:04.332756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.380 qpair failed and we were unable to recover it. 00:29:46.380 [2024-11-20 07:44:04.342693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.380 [2024-11-20 07:44:04.342736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.380 [2024-11-20 07:44:04.342753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.380 [2024-11-20 07:44:04.342760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.380 [2024-11-20 07:44:04.342766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.380 [2024-11-20 07:44:04.342781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.380 qpair failed and we were unable to recover it. 00:29:46.380 [2024-11-20 07:44:04.352758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.380 [2024-11-20 07:44:04.352808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.380 [2024-11-20 07:44:04.352821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.380 [2024-11-20 07:44:04.352828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.380 [2024-11-20 07:44:04.352834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.380 [2024-11-20 07:44:04.352852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.380 qpair failed and we were unable to recover it. 
00:29:46.380 [2024-11-20 07:44:04.362789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.380 [2024-11-20 07:44:04.362835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.380 [2024-11-20 07:44:04.362848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.380 [2024-11-20 07:44:04.362855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.380 [2024-11-20 07:44:04.362862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.380 [2024-11-20 07:44:04.362876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.380 qpair failed and we were unable to recover it. 00:29:46.380 [2024-11-20 07:44:04.372780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.380 [2024-11-20 07:44:04.372824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.380 [2024-11-20 07:44:04.372837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.380 [2024-11-20 07:44:04.372844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.380 [2024-11-20 07:44:04.372850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.380 [2024-11-20 07:44:04.372864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.380 qpair failed and we were unable to recover it. 00:29:46.380 [2024-11-20 07:44:04.382809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.380 [2024-11-20 07:44:04.382858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.380 [2024-11-20 07:44:04.382871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.380 [2024-11-20 07:44:04.382878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.380 [2024-11-20 07:44:04.382884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.380 [2024-11-20 07:44:04.382899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.381 qpair failed and we were unable to recover it. 
00:29:46.381 [2024-11-20 07:44:04.392847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.381 [2024-11-20 07:44:04.392898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.381 [2024-11-20 07:44:04.392911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.381 [2024-11-20 07:44:04.392918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.381 [2024-11-20 07:44:04.392924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.381 [2024-11-20 07:44:04.392938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.381 qpair failed and we were unable to recover it. 00:29:46.381 [2024-11-20 07:44:04.402783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.381 [2024-11-20 07:44:04.402834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.381 [2024-11-20 07:44:04.402847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.381 [2024-11-20 07:44:04.402854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.381 [2024-11-20 07:44:04.402860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.381 [2024-11-20 07:44:04.402875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.381 qpair failed and we were unable to recover it. 00:29:46.381 [2024-11-20 07:44:04.412867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.381 [2024-11-20 07:44:04.412909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.381 [2024-11-20 07:44:04.412922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.381 [2024-11-20 07:44:04.412929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.381 [2024-11-20 07:44:04.412936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.381 [2024-11-20 07:44:04.412950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.381 qpair failed and we were unable to recover it. 
00:29:46.381 [2024-11-20 07:44:04.422785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.381 [2024-11-20 07:44:04.422831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.381 [2024-11-20 07:44:04.422844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.381 [2024-11-20 07:44:04.422851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.381 [2024-11-20 07:44:04.422857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.381 [2024-11-20 07:44:04.422871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.381 qpair failed and we were unable to recover it. 00:29:46.381 [2024-11-20 07:44:04.432812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.381 [2024-11-20 07:44:04.432862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.381 [2024-11-20 07:44:04.432875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.381 [2024-11-20 07:44:04.432882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.381 [2024-11-20 07:44:04.432888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.381 [2024-11-20 07:44:04.432902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.381 qpair failed and we were unable to recover it. 00:29:46.381 [2024-11-20 07:44:04.442911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.381 [2024-11-20 07:44:04.442964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.381 [2024-11-20 07:44:04.442980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.381 [2024-11-20 07:44:04.442987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.381 [2024-11-20 07:44:04.442993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.381 [2024-11-20 07:44:04.443007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.381 qpair failed and we were unable to recover it. 
00:29:46.381 [2024-11-20 07:44:04.453011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.381 [2024-11-20 07:44:04.453103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.381 [2024-11-20 07:44:04.453116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.381 [2024-11-20 07:44:04.453124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.381 [2024-11-20 07:44:04.453130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.381 [2024-11-20 07:44:04.453144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.381 qpair failed and we were unable to recover it. 00:29:46.381 [2024-11-20 07:44:04.463041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.381 [2024-11-20 07:44:04.463087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.381 [2024-11-20 07:44:04.463100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.381 [2024-11-20 07:44:04.463107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.381 [2024-11-20 07:44:04.463113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.381 [2024-11-20 07:44:04.463127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.381 qpair failed and we were unable to recover it. 00:29:46.381 [2024-11-20 07:44:04.472998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.381 [2024-11-20 07:44:04.473048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.381 [2024-11-20 07:44:04.473061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.381 [2024-11-20 07:44:04.473068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.381 [2024-11-20 07:44:04.473074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.381 [2024-11-20 07:44:04.473089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.381 qpair failed and we were unable to recover it. 
00:29:46.381 [2024-11-20 07:44:04.483114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.381 [2024-11-20 07:44:04.483173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.381 [2024-11-20 07:44:04.483186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.381 [2024-11-20 07:44:04.483193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.381 [2024-11-20 07:44:04.483202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.381 [2024-11-20 07:44:04.483217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.381 qpair failed and we were unable to recover it. 00:29:46.381 [2024-11-20 07:44:04.493075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.381 [2024-11-20 07:44:04.493119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.381 [2024-11-20 07:44:04.493132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.381 [2024-11-20 07:44:04.493139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.381 [2024-11-20 07:44:04.493145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.381 [2024-11-20 07:44:04.493160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.381 qpair failed and we were unable to recover it. 00:29:46.381 [2024-11-20 07:44:04.503033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.381 [2024-11-20 07:44:04.503092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.381 [2024-11-20 07:44:04.503105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.381 [2024-11-20 07:44:04.503112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.381 [2024-11-20 07:44:04.503118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.381 [2024-11-20 07:44:04.503132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.381 qpair failed and we were unable to recover it. 
00:29:46.381 [2024-11-20 07:44:04.513140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.381 [2024-11-20 07:44:04.513188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.381 [2024-11-20 07:44:04.513201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.381 [2024-11-20 07:44:04.513208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.381 [2024-11-20 07:44:04.513214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.381 [2024-11-20 07:44:04.513229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.382 qpair failed and we were unable to recover it. 00:29:46.382 [2024-11-20 07:44:04.523218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.382 [2024-11-20 07:44:04.523273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.382 [2024-11-20 07:44:04.523285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.382 [2024-11-20 07:44:04.523293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.382 [2024-11-20 07:44:04.523299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.382 [2024-11-20 07:44:04.523313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.382 qpair failed and we were unable to recover it. 00:29:46.382 [2024-11-20 07:44:04.533227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.382 [2024-11-20 07:44:04.533273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.382 [2024-11-20 07:44:04.533286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.382 [2024-11-20 07:44:04.533293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.382 [2024-11-20 07:44:04.533299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:46.382 [2024-11-20 07:44:04.533313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.382 qpair failed and we were unable to recover it. 
00:29:46.382 [2024-11-20 07:44:04.543262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.382 [2024-11-20 07:44:04.543312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.382 [2024-11-20 07:44:04.543324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.382 [2024-11-20 07:44:04.543331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.382 [2024-11-20 07:44:04.543338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.382 [2024-11-20 07:44:04.543352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.382 qpair failed and we were unable to recover it.
00:29:46.382 [2024-11-20 07:44:04.553286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.382 [2024-11-20 07:44:04.553330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.382 [2024-11-20 07:44:04.553343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.382 [2024-11-20 07:44:04.553350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.382 [2024-11-20 07:44:04.553356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.382 [2024-11-20 07:44:04.553370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.382 qpair failed and we were unable to recover it.
00:29:46.382 [2024-11-20 07:44:04.563336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.382 [2024-11-20 07:44:04.563385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.382 [2024-11-20 07:44:04.563399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.382 [2024-11-20 07:44:04.563406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.382 [2024-11-20 07:44:04.563412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.382 [2024-11-20 07:44:04.563428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.382 qpair failed and we were unable to recover it.
00:29:46.382 [2024-11-20 07:44:04.573312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.382 [2024-11-20 07:44:04.573370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.382 [2024-11-20 07:44:04.573387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.382 [2024-11-20 07:44:04.573395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.382 [2024-11-20 07:44:04.573401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.382 [2024-11-20 07:44:04.573415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.382 qpair failed and we were unable to recover it.
00:29:46.382 [2024-11-20 07:44:04.583367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.382 [2024-11-20 07:44:04.583430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.382 [2024-11-20 07:44:04.583443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.382 [2024-11-20 07:44:04.583450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.382 [2024-11-20 07:44:04.583456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.382 [2024-11-20 07:44:04.583470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.382 qpair failed and we were unable to recover it.
00:29:46.644 [2024-11-20 07:44:04.593396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.644 [2024-11-20 07:44:04.593448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.644 [2024-11-20 07:44:04.593461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.644 [2024-11-20 07:44:04.593468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.644 [2024-11-20 07:44:04.593474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.644 [2024-11-20 07:44:04.593488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.644 qpair failed and we were unable to recover it.
00:29:46.644 [2024-11-20 07:44:04.603442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.644 [2024-11-20 07:44:04.603519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.644 [2024-11-20 07:44:04.603532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.644 [2024-11-20 07:44:04.603539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.644 [2024-11-20 07:44:04.603546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.644 [2024-11-20 07:44:04.603560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.644 qpair failed and we were unable to recover it.
00:29:46.644 [2024-11-20 07:44:04.613417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.644 [2024-11-20 07:44:04.613503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.644 [2024-11-20 07:44:04.613516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.644 [2024-11-20 07:44:04.613523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.644 [2024-11-20 07:44:04.613534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.644 [2024-11-20 07:44:04.613548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.644 qpair failed and we were unable to recover it.
00:29:46.644 [2024-11-20 07:44:04.623468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.644 [2024-11-20 07:44:04.623514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.644 [2024-11-20 07:44:04.623527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.644 [2024-11-20 07:44:04.623534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.644 [2024-11-20 07:44:04.623541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.644 [2024-11-20 07:44:04.623555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.644 qpair failed and we were unable to recover it.
00:29:46.644 [2024-11-20 07:44:04.633517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.644 [2024-11-20 07:44:04.633567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.644 [2024-11-20 07:44:04.633580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.644 [2024-11-20 07:44:04.633587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.644 [2024-11-20 07:44:04.633594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.644 [2024-11-20 07:44:04.633608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.644 qpair failed and we were unable to recover it.
00:29:46.644 [2024-11-20 07:44:04.643532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.644 [2024-11-20 07:44:04.643582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.644 [2024-11-20 07:44:04.643595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.644 [2024-11-20 07:44:04.643602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.644 [2024-11-20 07:44:04.643608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.644 [2024-11-20 07:44:04.643622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.644 qpair failed and we were unable to recover it.
00:29:46.644 [2024-11-20 07:44:04.653539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.644 [2024-11-20 07:44:04.653587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.644 [2024-11-20 07:44:04.653600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.644 [2024-11-20 07:44:04.653607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.644 [2024-11-20 07:44:04.653613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.644 [2024-11-20 07:44:04.653627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.644 qpair failed and we were unable to recover it.
00:29:46.644 [2024-11-20 07:44:04.663584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.644 [2024-11-20 07:44:04.663677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.644 [2024-11-20 07:44:04.663699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.644 [2024-11-20 07:44:04.663707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.644 [2024-11-20 07:44:04.663713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.645 [2024-11-20 07:44:04.663732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.645 qpair failed and we were unable to recover it.
00:29:46.645 [2024-11-20 07:44:04.673610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.645 [2024-11-20 07:44:04.673662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.645 [2024-11-20 07:44:04.673676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.645 [2024-11-20 07:44:04.673682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.645 [2024-11-20 07:44:04.673689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.645 [2024-11-20 07:44:04.673703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.645 qpair failed and we were unable to recover it.
00:29:46.645 [2024-11-20 07:44:04.683635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.645 [2024-11-20 07:44:04.683682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.645 [2024-11-20 07:44:04.683695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.645 [2024-11-20 07:44:04.683702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.645 [2024-11-20 07:44:04.683708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.645 [2024-11-20 07:44:04.683723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.645 qpair failed and we were unable to recover it.
00:29:46.645 [2024-11-20 07:44:04.693646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.645 [2024-11-20 07:44:04.693688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.645 [2024-11-20 07:44:04.693702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.645 [2024-11-20 07:44:04.693709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.645 [2024-11-20 07:44:04.693716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.645 [2024-11-20 07:44:04.693730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.645 qpair failed and we were unable to recover it.
00:29:46.645 [2024-11-20 07:44:04.703566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.645 [2024-11-20 07:44:04.703616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.645 [2024-11-20 07:44:04.703630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.645 [2024-11-20 07:44:04.703637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.645 [2024-11-20 07:44:04.703643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.645 [2024-11-20 07:44:04.703658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.645 qpair failed and we were unable to recover it.
00:29:46.645 [2024-11-20 07:44:04.713701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.645 [2024-11-20 07:44:04.713750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.645 [2024-11-20 07:44:04.713763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.645 [2024-11-20 07:44:04.713770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.645 [2024-11-20 07:44:04.713777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.645 [2024-11-20 07:44:04.713791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.645 qpair failed and we were unable to recover it.
00:29:46.645 [2024-11-20 07:44:04.723778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.645 [2024-11-20 07:44:04.723824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.645 [2024-11-20 07:44:04.723837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.645 [2024-11-20 07:44:04.723844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.645 [2024-11-20 07:44:04.723850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.645 [2024-11-20 07:44:04.723864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.645 qpair failed and we were unable to recover it.
00:29:46.645 [2024-11-20 07:44:04.733774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.645 [2024-11-20 07:44:04.733818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.645 [2024-11-20 07:44:04.733832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.645 [2024-11-20 07:44:04.733840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.645 [2024-11-20 07:44:04.733847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.645 [2024-11-20 07:44:04.733862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.645 qpair failed and we were unable to recover it.
00:29:46.645 [2024-11-20 07:44:04.743799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.645 [2024-11-20 07:44:04.743847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.645 [2024-11-20 07:44:04.743860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.645 [2024-11-20 07:44:04.743870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.645 [2024-11-20 07:44:04.743877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.645 [2024-11-20 07:44:04.743891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.645 qpair failed and we were unable to recover it.
00:29:46.645 [2024-11-20 07:44:04.753809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.645 [2024-11-20 07:44:04.753854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.645 [2024-11-20 07:44:04.753866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.645 [2024-11-20 07:44:04.753873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.645 [2024-11-20 07:44:04.753880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.645 [2024-11-20 07:44:04.753894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.645 qpair failed and we were unable to recover it.
00:29:46.645 [2024-11-20 07:44:04.763774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.645 [2024-11-20 07:44:04.763830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.645 [2024-11-20 07:44:04.763844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.645 [2024-11-20 07:44:04.763851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.645 [2024-11-20 07:44:04.763858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.645 [2024-11-20 07:44:04.763878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.645 qpair failed and we were unable to recover it.
00:29:46.645 [2024-11-20 07:44:04.773889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.645 [2024-11-20 07:44:04.773937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.645 [2024-11-20 07:44:04.773952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.645 [2024-11-20 07:44:04.773959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.645 [2024-11-20 07:44:04.773965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.645 [2024-11-20 07:44:04.773980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.645 qpair failed and we were unable to recover it.
00:29:46.645 [2024-11-20 07:44:04.783814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.645 [2024-11-20 07:44:04.783869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.645 [2024-11-20 07:44:04.783882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.645 [2024-11-20 07:44:04.783889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.645 [2024-11-20 07:44:04.783896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.645 [2024-11-20 07:44:04.783914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.645 qpair failed and we were unable to recover it.
00:29:46.645 [2024-11-20 07:44:04.793937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.645 [2024-11-20 07:44:04.794029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.645 [2024-11-20 07:44:04.794042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.645 [2024-11-20 07:44:04.794049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.646 [2024-11-20 07:44:04.794055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.646 [2024-11-20 07:44:04.794070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.646 qpair failed and we were unable to recover it.
00:29:46.646 [2024-11-20 07:44:04.804005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.646 [2024-11-20 07:44:04.804048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.646 [2024-11-20 07:44:04.804061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.646 [2024-11-20 07:44:04.804068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.646 [2024-11-20 07:44:04.804074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.646 [2024-11-20 07:44:04.804088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.646 qpair failed and we were unable to recover it.
00:29:46.646 [2024-11-20 07:44:04.813846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.646 [2024-11-20 07:44:04.813888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.646 [2024-11-20 07:44:04.813901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.646 [2024-11-20 07:44:04.813908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.646 [2024-11-20 07:44:04.813914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.646 [2024-11-20 07:44:04.813929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.646 qpair failed and we were unable to recover it.
00:29:46.646 [2024-11-20 07:44:04.824022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.646 [2024-11-20 07:44:04.824069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.646 [2024-11-20 07:44:04.824082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.646 [2024-11-20 07:44:04.824089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.646 [2024-11-20 07:44:04.824095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.646 [2024-11-20 07:44:04.824109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.646 qpair failed and we were unable to recover it.
00:29:46.646 [2024-11-20 07:44:04.834054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.646 [2024-11-20 07:44:04.834106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.646 [2024-11-20 07:44:04.834119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.646 [2024-11-20 07:44:04.834126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.646 [2024-11-20 07:44:04.834132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.646 [2024-11-20 07:44:04.834146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.646 qpair failed and we were unable to recover it.
00:29:46.646 [2024-11-20 07:44:04.843986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.646 [2024-11-20 07:44:04.844032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.646 [2024-11-20 07:44:04.844045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.646 [2024-11-20 07:44:04.844052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.646 [2024-11-20 07:44:04.844058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.646 [2024-11-20 07:44:04.844072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.646 qpair failed and we were unable to recover it.
00:29:46.907 [2024-11-20 07:44:04.853958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.907 [2024-11-20 07:44:04.854005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.907 [2024-11-20 07:44:04.854018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.907 [2024-11-20 07:44:04.854025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.907 [2024-11-20 07:44:04.854031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.907 [2024-11-20 07:44:04.854046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.907 qpair failed and we were unable to recover it.
00:29:46.907 [2024-11-20 07:44:04.864182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.907 [2024-11-20 07:44:04.864250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.907 [2024-11-20 07:44:04.864262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.907 [2024-11-20 07:44:04.864269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.907 [2024-11-20 07:44:04.864276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.907 [2024-11-20 07:44:04.864290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.907 qpair failed and we were unable to recover it.
00:29:46.908 [2024-11-20 07:44:04.874170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.908 [2024-11-20 07:44:04.874217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.908 [2024-11-20 07:44:04.874233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.908 [2024-11-20 07:44:04.874240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.908 [2024-11-20 07:44:04.874246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.908 [2024-11-20 07:44:04.874260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.908 qpair failed and we were unable to recover it.
00:29:46.908 [2024-11-20 07:44:04.884215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.908 [2024-11-20 07:44:04.884300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.908 [2024-11-20 07:44:04.884313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.908 [2024-11-20 07:44:04.884320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.908 [2024-11-20 07:44:04.884326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.908 [2024-11-20 07:44:04.884340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.908 qpair failed and we were unable to recover it.
00:29:46.908 [2024-11-20 07:44:04.894209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.908 [2024-11-20 07:44:04.894254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.908 [2024-11-20 07:44:04.894267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.908 [2024-11-20 07:44:04.894274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.908 [2024-11-20 07:44:04.894281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.908 [2024-11-20 07:44:04.894295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.908 qpair failed and we were unable to recover it.
00:29:46.908 [2024-11-20 07:44:04.904252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.908 [2024-11-20 07:44:04.904300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.908 [2024-11-20 07:44:04.904312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.908 [2024-11-20 07:44:04.904319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.908 [2024-11-20 07:44:04.904326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.908 [2024-11-20 07:44:04.904340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.908 qpair failed and we were unable to recover it.
00:29:46.908 [2024-11-20 07:44:04.914274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.908 [2024-11-20 07:44:04.914320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.908 [2024-11-20 07:44:04.914333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.908 [2024-11-20 07:44:04.914340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.908 [2024-11-20 07:44:04.914346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.908 [2024-11-20 07:44:04.914363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.908 qpair failed and we were unable to recover it.
00:29:46.908 [2024-11-20 07:44:04.924314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.908 [2024-11-20 07:44:04.924410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.908 [2024-11-20 07:44:04.924425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.908 [2024-11-20 07:44:04.924432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.908 [2024-11-20 07:44:04.924439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.908 [2024-11-20 07:44:04.924456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.908 qpair failed and we were unable to recover it.
00:29:46.908 [2024-11-20 07:44:04.934187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.908 [2024-11-20 07:44:04.934245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.908 [2024-11-20 07:44:04.934260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.908 [2024-11-20 07:44:04.934267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.908 [2024-11-20 07:44:04.934273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.908 [2024-11-20 07:44:04.934293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.908 qpair failed and we were unable to recover it.
00:29:46.908 [2024-11-20 07:44:04.944350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.908 [2024-11-20 07:44:04.944397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.908 [2024-11-20 07:44:04.944411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.908 [2024-11-20 07:44:04.944418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.908 [2024-11-20 07:44:04.944424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.908 [2024-11-20 07:44:04.944439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.908 qpair failed and we were unable to recover it.
00:29:46.908 [2024-11-20 07:44:04.954360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.908 [2024-11-20 07:44:04.954423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.908 [2024-11-20 07:44:04.954437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.908 [2024-11-20 07:44:04.954444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.908 [2024-11-20 07:44:04.954450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.908 [2024-11-20 07:44:04.954466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.908 qpair failed and we were unable to recover it.
00:29:46.908 [2024-11-20 07:44:04.964429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.908 [2024-11-20 07:44:04.964479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.908 [2024-11-20 07:44:04.964494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.908 [2024-11-20 07:44:04.964501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.908 [2024-11-20 07:44:04.964507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.908 [2024-11-20 07:44:04.964521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.908 qpair failed and we were unable to recover it.
00:29:46.908 [2024-11-20 07:44:04.974414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.908 [2024-11-20 07:44:04.974509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.908 [2024-11-20 07:44:04.974522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.908 [2024-11-20 07:44:04.974529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.908 [2024-11-20 07:44:04.974536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.908 [2024-11-20 07:44:04.974550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.908 qpair failed and we were unable to recover it.
00:29:46.908 [2024-11-20 07:44:04.984435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.908 [2024-11-20 07:44:04.984495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.908 [2024-11-20 07:44:04.984508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.908 [2024-11-20 07:44:04.984516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.908 [2024-11-20 07:44:04.984523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.908 [2024-11-20 07:44:04.984537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.908 qpair failed and we were unable to recover it.
00:29:46.908 [2024-11-20 07:44:04.994483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.908 [2024-11-20 07:44:04.994525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.908 [2024-11-20 07:44:04.994537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.908 [2024-11-20 07:44:04.994545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.908 [2024-11-20 07:44:04.994551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.909 [2024-11-20 07:44:04.994565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.909 qpair failed and we were unable to recover it.
00:29:46.909 [2024-11-20 07:44:05.004521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.909 [2024-11-20 07:44:05.004575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.909 [2024-11-20 07:44:05.004604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.909 [2024-11-20 07:44:05.004612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.909 [2024-11-20 07:44:05.004619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.909 [2024-11-20 07:44:05.004639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.909 qpair failed and we were unable to recover it.
00:29:46.909 [2024-11-20 07:44:05.014519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.909 [2024-11-20 07:44:05.014607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.909 [2024-11-20 07:44:05.014622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.909 [2024-11-20 07:44:05.014629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.909 [2024-11-20 07:44:05.014636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.909 [2024-11-20 07:44:05.014652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.909 qpair failed and we were unable to recover it.
00:29:46.909 [2024-11-20 07:44:05.024551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.909 [2024-11-20 07:44:05.024600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.909 [2024-11-20 07:44:05.024614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.909 [2024-11-20 07:44:05.024621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.909 [2024-11-20 07:44:05.024627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.909 [2024-11-20 07:44:05.024642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.909 qpair failed and we were unable to recover it.
00:29:46.909 [2024-11-20 07:44:05.034605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.909 [2024-11-20 07:44:05.034652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.909 [2024-11-20 07:44:05.034665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.909 [2024-11-20 07:44:05.034672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.909 [2024-11-20 07:44:05.034679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.909 [2024-11-20 07:44:05.034693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.909 qpair failed and we were unable to recover it.
00:29:46.909 [2024-11-20 07:44:05.044640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.909 [2024-11-20 07:44:05.044691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.909 [2024-11-20 07:44:05.044704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.909 [2024-11-20 07:44:05.044711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.909 [2024-11-20 07:44:05.044721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.909 [2024-11-20 07:44:05.044736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.909 qpair failed and we were unable to recover it.
00:29:46.909 [2024-11-20 07:44:05.054594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.909 [2024-11-20 07:44:05.054640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.909 [2024-11-20 07:44:05.054653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.909 [2024-11-20 07:44:05.054660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.909 [2024-11-20 07:44:05.054666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.909 [2024-11-20 07:44:05.054681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.909 qpair failed and we were unable to recover it.
00:29:46.909 [2024-11-20 07:44:05.064686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.909 [2024-11-20 07:44:05.064735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.909 [2024-11-20 07:44:05.064752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.909 [2024-11-20 07:44:05.064759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.909 [2024-11-20 07:44:05.064765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.909 [2024-11-20 07:44:05.064780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.909 qpair failed and we were unable to recover it.
00:29:46.909 [2024-11-20 07:44:05.074698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.909 [2024-11-20 07:44:05.074747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.909 [2024-11-20 07:44:05.074761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.909 [2024-11-20 07:44:05.074768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.909 [2024-11-20 07:44:05.074775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.909 [2024-11-20 07:44:05.074789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.909 qpair failed and we were unable to recover it.
00:29:46.909 [2024-11-20 07:44:05.084759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.909 [2024-11-20 07:44:05.084808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.909 [2024-11-20 07:44:05.084821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.909 [2024-11-20 07:44:05.084828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.909 [2024-11-20 07:44:05.084835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.909 [2024-11-20 07:44:05.084850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.909 qpair failed and we were unable to recover it.
00:29:46.909 [2024-11-20 07:44:05.094690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.909 [2024-11-20 07:44:05.094731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.909 [2024-11-20 07:44:05.094744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.909 [2024-11-20 07:44:05.094756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.909 [2024-11-20 07:44:05.094762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.909 [2024-11-20 07:44:05.094777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.909 qpair failed and we were unable to recover it.
00:29:46.909 [2024-11-20 07:44:05.104741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:46.909 [2024-11-20 07:44:05.104789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:46.909 [2024-11-20 07:44:05.104802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:46.909 [2024-11-20 07:44:05.104809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:46.909 [2024-11-20 07:44:05.104815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90
00:29:46.909 [2024-11-20 07:44:05.104829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:46.909 qpair failed and we were unable to recover it.
00:29:47.171 [2024-11-20 07:44:05.114808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.171 [2024-11-20 07:44:05.114853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.171 [2024-11-20 07:44:05.114866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.171 [2024-11-20 07:44:05.114873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.171 [2024-11-20 07:44:05.114880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.171 [2024-11-20 07:44:05.114893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.171 qpair failed and we were unable to recover it. 00:29:47.171 [2024-11-20 07:44:05.124723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.171 [2024-11-20 07:44:05.124774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.171 [2024-11-20 07:44:05.124788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.171 [2024-11-20 07:44:05.124795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.171 [2024-11-20 07:44:05.124801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.171 [2024-11-20 07:44:05.124815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.171 qpair failed and we were unable to recover it. 00:29:47.171 [2024-11-20 07:44:05.134836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.171 [2024-11-20 07:44:05.134882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.171 [2024-11-20 07:44:05.134902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.171 [2024-11-20 07:44:05.134910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.171 [2024-11-20 07:44:05.134916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.171 [2024-11-20 07:44:05.134930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.171 qpair failed and we were unable to recover it. 
00:29:47.171 [2024-11-20 07:44:05.144859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.171 [2024-11-20 07:44:05.144905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.171 [2024-11-20 07:44:05.144918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.171 [2024-11-20 07:44:05.144925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.171 [2024-11-20 07:44:05.144931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.171 [2024-11-20 07:44:05.144945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.171 qpair failed and we were unable to recover it. 00:29:47.171 [2024-11-20 07:44:05.154922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.171 [2024-11-20 07:44:05.155020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.171 [2024-11-20 07:44:05.155033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.171 [2024-11-20 07:44:05.155040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.171 [2024-11-20 07:44:05.155046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.171 [2024-11-20 07:44:05.155061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.171 qpair failed and we were unable to recover it. 00:29:47.171 [2024-11-20 07:44:05.164876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.171 [2024-11-20 07:44:05.164920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.171 [2024-11-20 07:44:05.164933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.171 [2024-11-20 07:44:05.164939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.171 [2024-11-20 07:44:05.164946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.171 [2024-11-20 07:44:05.164960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.171 qpair failed and we were unable to recover it. 
00:29:47.171 [2024-11-20 07:44:05.174964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.171 [2024-11-20 07:44:05.175009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.171 [2024-11-20 07:44:05.175021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.171 [2024-11-20 07:44:05.175032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.171 [2024-11-20 07:44:05.175038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.171 [2024-11-20 07:44:05.175053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.171 qpair failed and we were unable to recover it. 00:29:47.171 [2024-11-20 07:44:05.184993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.171 [2024-11-20 07:44:05.185066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.171 [2024-11-20 07:44:05.185079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.171 [2024-11-20 07:44:05.185086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.171 [2024-11-20 07:44:05.185092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.171 [2024-11-20 07:44:05.185106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.171 qpair failed and we were unable to recover it. 00:29:47.171 [2024-11-20 07:44:05.194906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.171 [2024-11-20 07:44:05.194956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.171 [2024-11-20 07:44:05.194969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.171 [2024-11-20 07:44:05.194976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.171 [2024-11-20 07:44:05.194982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.171 [2024-11-20 07:44:05.194996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.171 qpair failed and we were unable to recover it. 
00:29:47.171 [2024-11-20 07:44:05.205076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.171 [2024-11-20 07:44:05.205122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.171 [2024-11-20 07:44:05.205134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.171 [2024-11-20 07:44:05.205142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.171 [2024-11-20 07:44:05.205148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.171 [2024-11-20 07:44:05.205162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.171 qpair failed and we were unable to recover it. 00:29:47.171 [2024-11-20 07:44:05.215047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.172 [2024-11-20 07:44:05.215089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.172 [2024-11-20 07:44:05.215102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.172 [2024-11-20 07:44:05.215109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.172 [2024-11-20 07:44:05.215115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.172 [2024-11-20 07:44:05.215130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.172 qpair failed and we were unable to recover it. 00:29:47.172 [2024-11-20 07:44:05.225074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.172 [2024-11-20 07:44:05.225122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.172 [2024-11-20 07:44:05.225134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.172 [2024-11-20 07:44:05.225141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.172 [2024-11-20 07:44:05.225147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.172 [2024-11-20 07:44:05.225161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.172 qpair failed and we were unable to recover it. 
00:29:47.172 [2024-11-20 07:44:05.235140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.172 [2024-11-20 07:44:05.235238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.172 [2024-11-20 07:44:05.235251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.172 [2024-11-20 07:44:05.235258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.172 [2024-11-20 07:44:05.235264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.172 [2024-11-20 07:44:05.235278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.172 qpair failed and we were unable to recover it. 00:29:47.172 [2024-11-20 07:44:05.245196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.172 [2024-11-20 07:44:05.245239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.172 [2024-11-20 07:44:05.245252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.172 [2024-11-20 07:44:05.245258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.172 [2024-11-20 07:44:05.245265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.172 [2024-11-20 07:44:05.245279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.172 qpair failed and we were unable to recover it. 00:29:47.172 [2024-11-20 07:44:05.255236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.172 [2024-11-20 07:44:05.255282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.172 [2024-11-20 07:44:05.255295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.172 [2024-11-20 07:44:05.255302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.172 [2024-11-20 07:44:05.255308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.172 [2024-11-20 07:44:05.255322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.172 qpair failed and we were unable to recover it. 
00:29:47.172 [2024-11-20 07:44:05.265248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.172 [2024-11-20 07:44:05.265298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.172 [2024-11-20 07:44:05.265312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.172 [2024-11-20 07:44:05.265318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.172 [2024-11-20 07:44:05.265325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.172 [2024-11-20 07:44:05.265339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.172 qpair failed and we were unable to recover it. 00:29:47.172 [2024-11-20 07:44:05.275264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.172 [2024-11-20 07:44:05.275314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.172 [2024-11-20 07:44:05.275327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.172 [2024-11-20 07:44:05.275334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.172 [2024-11-20 07:44:05.275340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.172 [2024-11-20 07:44:05.275354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.172 qpair failed and we were unable to recover it. 00:29:47.172 [2024-11-20 07:44:05.285345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.172 [2024-11-20 07:44:05.285430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.172 [2024-11-20 07:44:05.285443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.172 [2024-11-20 07:44:05.285450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.172 [2024-11-20 07:44:05.285456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.172 [2024-11-20 07:44:05.285470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.172 qpair failed and we were unable to recover it. 
00:29:47.172 [2024-11-20 07:44:05.295285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.172 [2024-11-20 07:44:05.295327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.172 [2024-11-20 07:44:05.295340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.172 [2024-11-20 07:44:05.295347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.172 [2024-11-20 07:44:05.295353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.172 [2024-11-20 07:44:05.295367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.172 qpair failed and we were unable to recover it. 00:29:47.172 [2024-11-20 07:44:05.305352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.172 [2024-11-20 07:44:05.305401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.172 [2024-11-20 07:44:05.305414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.172 [2024-11-20 07:44:05.305425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.172 [2024-11-20 07:44:05.305431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.172 [2024-11-20 07:44:05.305445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.172 qpair failed and we were unable to recover it. 00:29:47.172 [2024-11-20 07:44:05.315369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.172 [2024-11-20 07:44:05.315456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.172 [2024-11-20 07:44:05.315469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.172 [2024-11-20 07:44:05.315476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.172 [2024-11-20 07:44:05.315482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.172 [2024-11-20 07:44:05.315496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.172 qpair failed and we were unable to recover it. 
00:29:47.172 [2024-11-20 07:44:05.325429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.172 [2024-11-20 07:44:05.325510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.172 [2024-11-20 07:44:05.325523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.172 [2024-11-20 07:44:05.325531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.172 [2024-11-20 07:44:05.325537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.172 [2024-11-20 07:44:05.325551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.172 qpair failed and we were unable to recover it. 00:29:47.172 [2024-11-20 07:44:05.335383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.172 [2024-11-20 07:44:05.335425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.172 [2024-11-20 07:44:05.335438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.172 [2024-11-20 07:44:05.335445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.172 [2024-11-20 07:44:05.335451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.172 [2024-11-20 07:44:05.335465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.172 qpair failed and we were unable to recover it. 00:29:47.172 [2024-11-20 07:44:05.345428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.173 [2024-11-20 07:44:05.345474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.173 [2024-11-20 07:44:05.345487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.173 [2024-11-20 07:44:05.345494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.173 [2024-11-20 07:44:05.345501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.173 [2024-11-20 07:44:05.345518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.173 qpair failed and we were unable to recover it. 
00:29:47.173 [2024-11-20 07:44:05.355443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.173 [2024-11-20 07:44:05.355494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.173 [2024-11-20 07:44:05.355509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.173 [2024-11-20 07:44:05.355516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.173 [2024-11-20 07:44:05.355522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.173 [2024-11-20 07:44:05.355536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.173 qpair failed and we were unable to recover it. 00:29:47.173 [2024-11-20 07:44:05.365538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.173 [2024-11-20 07:44:05.365610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.173 [2024-11-20 07:44:05.365623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.173 [2024-11-20 07:44:05.365630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.173 [2024-11-20 07:44:05.365636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.173 [2024-11-20 07:44:05.365651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.173 qpair failed and we were unable to recover it. 00:29:47.435 [2024-11-20 07:44:05.375512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.435 [2024-11-20 07:44:05.375558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.435 [2024-11-20 07:44:05.375571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.435 [2024-11-20 07:44:05.375578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.435 [2024-11-20 07:44:05.375585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.435 [2024-11-20 07:44:05.375599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.435 qpair failed and we were unable to recover it. 
00:29:47.435 [2024-11-20 07:44:05.385397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.435 [2024-11-20 07:44:05.385440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.435 [2024-11-20 07:44:05.385454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.435 [2024-11-20 07:44:05.385461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.435 [2024-11-20 07:44:05.385467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.435 [2024-11-20 07:44:05.385481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.435 qpair failed and we were unable to recover it. 00:29:47.435 [2024-11-20 07:44:05.395568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.435 [2024-11-20 07:44:05.395621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.435 [2024-11-20 07:44:05.395634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.435 [2024-11-20 07:44:05.395641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.435 [2024-11-20 07:44:05.395647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.435 [2024-11-20 07:44:05.395661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.435 qpair failed and we were unable to recover it. 00:29:47.435 [2024-11-20 07:44:05.405620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.435 [2024-11-20 07:44:05.405669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.435 [2024-11-20 07:44:05.405681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.435 [2024-11-20 07:44:05.405688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.435 [2024-11-20 07:44:05.405694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.435 [2024-11-20 07:44:05.405708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.435 qpair failed and we were unable to recover it. 
00:29:47.435 [2024-11-20 07:44:05.415599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.435 [2024-11-20 07:44:05.415639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.435 [2024-11-20 07:44:05.415651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.435 [2024-11-20 07:44:05.415659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.435 [2024-11-20 07:44:05.415665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.435 [2024-11-20 07:44:05.415679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.435 qpair failed and we were unable to recover it. 00:29:47.435 [2024-11-20 07:44:05.425548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.435 [2024-11-20 07:44:05.425595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.435 [2024-11-20 07:44:05.425608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.435 [2024-11-20 07:44:05.425615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.435 [2024-11-20 07:44:05.425621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.435 [2024-11-20 07:44:05.425635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.435 qpair failed and we were unable to recover it. 00:29:47.435 [2024-11-20 07:44:05.435681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.435 [2024-11-20 07:44:05.435726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.435 [2024-11-20 07:44:05.435742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.436 [2024-11-20 07:44:05.435754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.436 [2024-11-20 07:44:05.435760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.436 [2024-11-20 07:44:05.435775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.436 qpair failed and we were unable to recover it. 
00:29:47.436 [2024-11-20 07:44:05.445770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.436 [2024-11-20 07:44:05.445816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.436 [2024-11-20 07:44:05.445829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.436 [2024-11-20 07:44:05.445836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.436 [2024-11-20 07:44:05.445842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.436 [2024-11-20 07:44:05.445856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.436 qpair failed and we were unable to recover it. 00:29:47.436 [2024-11-20 07:44:05.455712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.436 [2024-11-20 07:44:05.455760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.436 [2024-11-20 07:44:05.455773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.436 [2024-11-20 07:44:05.455780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.436 [2024-11-20 07:44:05.455786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.436 [2024-11-20 07:44:05.455800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.436 qpair failed and we were unable to recover it. 00:29:47.436 [2024-11-20 07:44:05.465749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.436 [2024-11-20 07:44:05.465794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.436 [2024-11-20 07:44:05.465807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.436 [2024-11-20 07:44:05.465814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.436 [2024-11-20 07:44:05.465820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.436 [2024-11-20 07:44:05.465835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.436 qpair failed and we were unable to recover it. 
00:29:47.436 [2024-11-20 07:44:05.475764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.436 [2024-11-20 07:44:05.475815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.436 [2024-11-20 07:44:05.475828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.436 [2024-11-20 07:44:05.475835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.436 [2024-11-20 07:44:05.475841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.436 [2024-11-20 07:44:05.475860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.436 qpair failed and we were unable to recover it. 00:29:47.436 [2024-11-20 07:44:05.485699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.436 [2024-11-20 07:44:05.485782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.436 [2024-11-20 07:44:05.485796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.436 [2024-11-20 07:44:05.485804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.436 [2024-11-20 07:44:05.485811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.436 [2024-11-20 07:44:05.485826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.436 qpair failed and we were unable to recover it. 00:29:47.436 [2024-11-20 07:44:05.495830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.436 [2024-11-20 07:44:05.495877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.436 [2024-11-20 07:44:05.495889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.436 [2024-11-20 07:44:05.495896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.436 [2024-11-20 07:44:05.495903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.436 [2024-11-20 07:44:05.495917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.436 qpair failed and we were unable to recover it. 
00:29:47.436 [2024-11-20 07:44:05.505710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.436 [2024-11-20 07:44:05.505785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.436 [2024-11-20 07:44:05.505798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.436 [2024-11-20 07:44:05.505805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.436 [2024-11-20 07:44:05.505811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.436 [2024-11-20 07:44:05.505825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.436 qpair failed and we were unable to recover it. 00:29:47.436 [2024-11-20 07:44:05.515776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.436 [2024-11-20 07:44:05.515827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.436 [2024-11-20 07:44:05.515840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.436 [2024-11-20 07:44:05.515847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.436 [2024-11-20 07:44:05.515853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.436 [2024-11-20 07:44:05.515867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.436 qpair failed and we were unable to recover it. 00:29:47.436 [2024-11-20 07:44:05.525930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.436 [2024-11-20 07:44:05.525980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.436 [2024-11-20 07:44:05.525992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.436 [2024-11-20 07:44:05.525999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.436 [2024-11-20 07:44:05.526006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.436 [2024-11-20 07:44:05.526020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.436 qpair failed and we were unable to recover it. 
00:29:47.436 [2024-11-20 07:44:05.535941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.436 [2024-11-20 07:44:05.535983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.436 [2024-11-20 07:44:05.535996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.436 [2024-11-20 07:44:05.536003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.436 [2024-11-20 07:44:05.536009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.436 [2024-11-20 07:44:05.536023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.436 qpair failed and we were unable to recover it. 00:29:47.436 [2024-11-20 07:44:05.545937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.436 [2024-11-20 07:44:05.545983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.436 [2024-11-20 07:44:05.545996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.436 [2024-11-20 07:44:05.546003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.436 [2024-11-20 07:44:05.546009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.436 [2024-11-20 07:44:05.546023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.436 qpair failed and we were unable to recover it. 00:29:47.436 [2024-11-20 07:44:05.555997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.436 [2024-11-20 07:44:05.556074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.436 [2024-11-20 07:44:05.556087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.436 [2024-11-20 07:44:05.556094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.436 [2024-11-20 07:44:05.556100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.436 [2024-11-20 07:44:05.556114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.436 qpair failed and we were unable to recover it. 
00:29:47.436 [2024-11-20 07:44:05.566095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.436 [2024-11-20 07:44:05.566142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.436 [2024-11-20 07:44:05.566158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.437 [2024-11-20 07:44:05.566166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.437 [2024-11-20 07:44:05.566172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.437 [2024-11-20 07:44:05.566186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.437 qpair failed and we were unable to recover it. 00:29:47.437 [2024-11-20 07:44:05.575999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.437 [2024-11-20 07:44:05.576041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.437 [2024-11-20 07:44:05.576054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.437 [2024-11-20 07:44:05.576061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.437 [2024-11-20 07:44:05.576067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.437 [2024-11-20 07:44:05.576081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.437 qpair failed and we were unable to recover it. 00:29:47.437 [2024-11-20 07:44:05.586061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.437 [2024-11-20 07:44:05.586108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.437 [2024-11-20 07:44:05.586121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.437 [2024-11-20 07:44:05.586128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.437 [2024-11-20 07:44:05.586134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.437 [2024-11-20 07:44:05.586148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.437 qpair failed and we were unable to recover it. 
00:29:47.437 [2024-11-20 07:44:05.596096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.437 [2024-11-20 07:44:05.596144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.437 [2024-11-20 07:44:05.596157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.437 [2024-11-20 07:44:05.596164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.437 [2024-11-20 07:44:05.596170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.437 [2024-11-20 07:44:05.596184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.437 qpair failed and we were unable to recover it. 00:29:47.437 [2024-11-20 07:44:05.606142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.437 [2024-11-20 07:44:05.606190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.437 [2024-11-20 07:44:05.606202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.437 [2024-11-20 07:44:05.606209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.437 [2024-11-20 07:44:05.606219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.437 [2024-11-20 07:44:05.606233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.437 qpair failed and we were unable to recover it. 00:29:47.437 [2024-11-20 07:44:05.616137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.437 [2024-11-20 07:44:05.616178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.437 [2024-11-20 07:44:05.616191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.437 [2024-11-20 07:44:05.616197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.437 [2024-11-20 07:44:05.616203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.437 [2024-11-20 07:44:05.616218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.437 qpair failed and we were unable to recover it. 
00:29:47.437 [2024-11-20 07:44:05.626180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.437 [2024-11-20 07:44:05.626261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.437 [2024-11-20 07:44:05.626273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.437 [2024-11-20 07:44:05.626280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.437 [2024-11-20 07:44:05.626287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.437 [2024-11-20 07:44:05.626301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.437 qpair failed and we were unable to recover it. 00:29:47.437 [2024-11-20 07:44:05.636212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.437 [2024-11-20 07:44:05.636268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.437 [2024-11-20 07:44:05.636281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.437 [2024-11-20 07:44:05.636288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.437 [2024-11-20 07:44:05.636295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.437 [2024-11-20 07:44:05.636309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.437 qpair failed and we were unable to recover it. 00:29:47.698 [2024-11-20 07:44:05.646294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-11-20 07:44:05.646367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-11-20 07:44:05.646379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-11-20 07:44:05.646386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-11-20 07:44:05.646392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.698 [2024-11-20 07:44:05.646406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.698 qpair failed and we were unable to recover it. 
00:29:47.698 [2024-11-20 07:44:05.656251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-11-20 07:44:05.656297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-11-20 07:44:05.656310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-11-20 07:44:05.656317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-11-20 07:44:05.656324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa124000b90 00:29:47.698 [2024-11-20 07:44:05.656337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.698 qpair failed and we were unable to recover it. 00:29:47.698 [2024-11-20 07:44:05.666280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-11-20 07:44:05.666413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-11-20 07:44:05.666480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-11-20 07:44:05.666506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-11-20 07:44:05.666526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa11c000b90 00:29:47.698 [2024-11-20 07:44:05.666581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 00:29:47.698 [2024-11-20 07:44:05.676324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-11-20 07:44:05.676440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-11-20 07:44:05.676484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-11-20 07:44:05.676500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-11-20 07:44:05.676514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa11c000b90 00:29:47.698 [2024-11-20 07:44:05.676553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 
00:29:47.698 [2024-11-20 07:44:05.686331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-11-20 07:44:05.686414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-11-20 07:44:05.686473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-11-20 07:44:05.686495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-11-20 07:44:05.686513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1fd0010 00:29:47.698 [2024-11-20 07:44:05.686559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.698 qpair failed and we were unable to recover it. 00:29:47.698 [2024-11-20 07:44:05.696387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-11-20 07:44:05.696476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.699 [2024-11-20 07:44:05.696543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.699 [2024-11-20 07:44:05.696566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.699 [2024-11-20 07:44:05.696584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1fd0010 00:29:47.699 [2024-11-20 07:44:05.696630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.699 qpair failed and we were unable to recover it. 00:29:47.699 [2024-11-20 07:44:05.696802] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:47.699 A controller has encountered a failure and is being reset. 00:29:47.699 [2024-11-20 07:44:05.696916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fddf30 (9): Bad file descriptor 00:29:47.699 Controller properly reset. 00:29:47.699 Initializing NVMe Controllers 00:29:47.699 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:47.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:47.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:47.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:47.699 Initialization complete. Launching workers. 
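The records above repeat one failure pattern: the target no longer recognizes controller ID 0x1, so each I/O-queue CONNECT poll fails (rc -5, completing with sct 1, sc 130) and the qpair dies with CQ transport error -6 (No such device or address). Once the Keep Alive submission also fails, the host resets the controller, re-attaches to the listener at 10.0.0.2:4420, and re-associates one qpair per lcore (0-3) before relaunching workers. A minimal sketch of the equivalent host-side connect, done with the standard kernel initiator via nvme-cli rather than the SPDK host driver the test uses (the address, port, and subsystem NQN are taken from the log):

    # hedged sketch, not part of the test: same listener, kernel initiator
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1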
00:29:47.699 Starting thread on core 1 00:29:47.699 Starting thread on core 2 00:29:47.699 Starting thread on core 3 00:29:47.699 Starting thread on core 0 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:47.699 00:29:47.699 real 0m11.428s 00:29:47.699 user 0m21.895s 00:29:47.699 sys 0m4.010s 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:47.699 ************************************ 00:29:47.699 END TEST nvmf_target_disconnect_tc2 00:29:47.699 ************************************ 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:47.699 rmmod nvme_tcp 00:29:47.699 rmmod nvme_fabrics 00:29:47.699 rmmod nvme_keyring 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3590762 ']' 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3590762 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3590762 ']' 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 3590762 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:47.699 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3590762 00:29:47.960 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:29:47.960 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:29:47.960 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3590762' 00:29:47.960 killing process with pid 3590762 00:29:47.960 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 3590762 00:29:47.960 07:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 3590762 00:29:47.960 07:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:47.960 07:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:47.960 07:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:47.960 07:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:47.960 07:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:47.960 07:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:47.960 07:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:47.960 07:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:47.960 07:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:47.960 07:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.960 07:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.960 07:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.506 07:44:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.506 00:29:50.506 real 0m21.988s 00:29:50.506 user 0m49.749s 00:29:50.506 sys 0m10.232s 00:29:50.506 07:44:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:50.506 07:44:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:50.506 ************************************ 00:29:50.506 END TEST nvmf_target_disconnect 00:29:50.506 ************************************ 00:29:50.506 07:44:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:50.506 00:29:50.506 real 6m34.305s 00:29:50.506 user 11m25.435s 00:29:50.506 sys 2m17.447s 00:29:50.506 07:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:50.506 07:44:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.506 ************************************ 00:29:50.506 END TEST nvmf_host 00:29:50.506 ************************************ 00:29:50.506 07:44:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:50.506 07:44:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:50.506 07:44:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:50.506 07:44:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:50.506 07:44:08 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:50.506 07:44:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.506 ************************************ 00:29:50.506 START TEST nvmf_target_core_interrupt_mode 00:29:50.506 ************************************ 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:50.506 * Looking for test storage... 00:29:50.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:50.506 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:50.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.507 --rc genhtml_branch_coverage=1 00:29:50.507 --rc genhtml_function_coverage=1 00:29:50.507 --rc genhtml_legend=1 00:29:50.507 --rc geninfo_all_blocks=1 00:29:50.507 --rc geninfo_unexecuted_blocks=1 00:29:50.507 00:29:50.507 ' 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:50.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.507 --rc genhtml_branch_coverage=1 00:29:50.507 --rc genhtml_function_coverage=1 00:29:50.507 --rc genhtml_legend=1 00:29:50.507 --rc geninfo_all_blocks=1 00:29:50.507 --rc geninfo_unexecuted_blocks=1 00:29:50.507 00:29:50.507 ' 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:50.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.507 --rc genhtml_branch_coverage=1 00:29:50.507 --rc genhtml_function_coverage=1 00:29:50.507 --rc genhtml_legend=1 00:29:50.507 --rc geninfo_all_blocks=1 00:29:50.507 --rc geninfo_unexecuted_blocks=1 00:29:50.507 00:29:50.507 ' 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:50.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.507 --rc genhtml_branch_coverage=1 00:29:50.507 --rc genhtml_function_coverage=1 00:29:50.507 --rc genhtml_legend=1 00:29:50.507 --rc geninfo_all_blocks=1 00:29:50.507 --rc geninfo_unexecuted_blocks=1 00:29:50.507 00:29:50.507 ' 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:50.507 ************************************ 00:29:50.507 START TEST nvmf_abort 00:29:50.507 ************************************ 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:50.507 * Looking for test storage... 00:29:50.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:50.507 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:29:50.508 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:50.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.769 --rc genhtml_branch_coverage=1 00:29:50.769 --rc genhtml_function_coverage=1 00:29:50.769 --rc genhtml_legend=1 00:29:50.769 --rc geninfo_all_blocks=1 00:29:50.769 --rc geninfo_unexecuted_blocks=1 00:29:50.769 00:29:50.769 ' 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:50.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.769 --rc genhtml_branch_coverage=1 00:29:50.769 --rc genhtml_function_coverage=1 00:29:50.769 --rc genhtml_legend=1 00:29:50.769 --rc geninfo_all_blocks=1 00:29:50.769 --rc geninfo_unexecuted_blocks=1 00:29:50.769 00:29:50.769 ' 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:50.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.769 --rc genhtml_branch_coverage=1 00:29:50.769 --rc genhtml_function_coverage=1 00:29:50.769 --rc genhtml_legend=1 00:29:50.769 --rc geninfo_all_blocks=1 00:29:50.769 --rc geninfo_unexecuted_blocks=1 00:29:50.769 00:29:50.769 ' 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:50.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.769 --rc genhtml_branch_coverage=1 00:29:50.769 --rc genhtml_function_coverage=1 00:29:50.769 --rc genhtml_legend=1 00:29:50.769 --rc geninfo_all_blocks=1 00:29:50.769 --rc geninfo_unexecuted_blocks=1 00:29:50.769 00:29:50.769 ' 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.769 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.770 07:44:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.770 07:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:58.915 07:44:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:58.915 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
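The discovery above walks the PCI bus and classifies the two Intel 0x159b ports (ice-driven E810, matching SPDK_TEST_NVMF_NICS=e810) at 0000:31:00.0 and 0000:31:00.1. A minimal sketch of the same lookup done by hand; the PCI IDs come from the log, and the lspci/sysfs paths are standard Linux, not SPDK-specific:

    # hedged sketch: list the E810 ports and the netdev behind one of them
    lspci -d 8086:159b
    ls /sys/bus/pci/devices/0000:31:00.0/net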
00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:58.915 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:58.915 Found net devices under 0000:31:00.0: cvl_0_0 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:58.915 Found net devices under 0000:31:00.1: cvl_0_1 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:58.915 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:58.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:58.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:29:58.916 00:29:58.916 --- 10.0.0.2 ping statistics --- 00:29:58.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.916 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:58.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:58.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:29:58.916 00:29:58.916 --- 10.0.0.1 ping statistics --- 00:29:58.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.916 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3596290 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3596290 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3596290 ']' 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:58.916 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.916 [2024-11-20 07:44:16.503794] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:58.916 [2024-11-20 07:44:16.504953] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:29:58.916 [2024-11-20 07:44:16.505006] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.916 [2024-11-20 07:44:16.606836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:58.916 [2024-11-20 07:44:16.657605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.916 [2024-11-20 07:44:16.657659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.916 [2024-11-20 07:44:16.657668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.916 [2024-11-20 07:44:16.657674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.916 [2024-11-20 07:44:16.657680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.916 [2024-11-20 07:44:16.659568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:58.916 [2024-11-20 07:44:16.659728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.916 [2024-11-20 07:44:16.659729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:58.916 [2024-11-20 07:44:16.737594] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:58.916 [2024-11-20 07:44:16.738630] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:58.916 [2024-11-20 07:44:16.739043] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
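The nvmftestinit/nvmfappstart steps above reduce to a short sequence: move one port into a fresh network namespace, address both ends, open TCP/4420, verify reachability in both directions, then start nvmf_tgt inside the namespace in interrupt mode. A condensed sketch of those same steps, all taken from commands the log already shows:

    # hedged sketch of the test-bed plumbing from the log
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE

With -m 0xE the target takes cores 1-3 (the three reactors started above), and --interrupt-mode switches app_thread and the nvmf_tgt poll groups to interrupt rather than polled operation, as the thread.c notices confirm.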
00:29:58.916 [2024-11-20 07:44:16.739234] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:59.177 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:59.177 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:29:59.177 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.177 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:59.177 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.177 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.177 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:59.178 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.178 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.178 [2024-11-20 07:44:17.380615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.439 Malloc0 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.439 Delay0 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.439 [2024-11-20 07:44:17.484615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.439 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:59.439 [2024-11-20 07:44:17.631560] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:01.991 Initializing NVMe Controllers 00:30:01.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:01.991 controller IO queue size 128 less than required 00:30:01.991 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:01.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:01.991 Initialization complete. Launching workers. 
00:30:01.991 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28603 00:30:01.991 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28660, failed to submit 66 00:30:01.991 success 28603, unsuccessful 57, failed 0 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:01.991 rmmod nvme_tcp 00:30:01.991 rmmod nvme_fabrics 00:30:01.991 rmmod nvme_keyring 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3596290 ']' 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3596290 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3596290 ']' 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3596290 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3596290 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3596290' 00:30:01.991 killing process with pid 3596290 
00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3596290 00:30:01.991 07:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3596290 00:30:01.991 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:01.991 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:01.991 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:01.991 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:01.991 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:01.991 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:01.991 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:01.991 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:01.991 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:01.991 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.991 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.991 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:04.540 00:30:04.540 real 0m13.577s 00:30:04.540 user 0m10.995s 00:30:04.540 sys 0m7.109s 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:04.540 ************************************ 00:30:04.540 END TEST nvmf_abort 00:30:04.540 ************************************ 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:04.540 ************************************ 00:30:04.540 START TEST nvmf_ns_hotplug_stress 00:30:04.540 ************************************ 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:04.540 * Looking for test storage... 
00:30:04.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:04.540 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:04.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.541 --rc genhtml_branch_coverage=1 00:30:04.541 --rc genhtml_function_coverage=1 00:30:04.541 --rc genhtml_legend=1 00:30:04.541 --rc geninfo_all_blocks=1 00:30:04.541 --rc geninfo_unexecuted_blocks=1 00:30:04.541 00:30:04.541 ' 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:04.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.541 --rc genhtml_branch_coverage=1 00:30:04.541 --rc genhtml_function_coverage=1 00:30:04.541 --rc genhtml_legend=1 00:30:04.541 --rc geninfo_all_blocks=1 00:30:04.541 --rc geninfo_unexecuted_blocks=1 00:30:04.541 00:30:04.541 ' 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:04.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.541 --rc genhtml_branch_coverage=1 00:30:04.541 --rc genhtml_function_coverage=1 00:30:04.541 --rc genhtml_legend=1 00:30:04.541 --rc geninfo_all_blocks=1 00:30:04.541 --rc geninfo_unexecuted_blocks=1 00:30:04.541 00:30:04.541 ' 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:04.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.541 --rc genhtml_branch_coverage=1 00:30:04.541 --rc genhtml_function_coverage=1 
00:30:04.541 --rc genhtml_legend=1 00:30:04.541 --rc geninfo_all_blocks=1 00:30:04.541 --rc geninfo_unexecuted_blocks=1 00:30:04.541 00:30:04.541 ' 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.541 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.542 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:04.542 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:04.542 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:04.542 07:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:12.682 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.682 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:12.682 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:12.683 07:44:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:12.683 07:44:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:12.683 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:12.683 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.683 
07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:12.683 Found net devices under 0000:31:00.0: cvl_0_0 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:12.683 Found net devices under 0000:31:00.1: cvl_0_1 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.683 07:44:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:12.683 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.684 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:12.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:30:12.684 00:30:12.684 --- 10.0.0.2 ping statistics --- 00:30:12.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.684 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:30:12.684 00:30:12.684 --- 10.0.0.1 ping statistics --- 00:30:12.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.684 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3601094 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3601094 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 3601094 ']' 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:12.684 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:12.684 [2024-11-20 07:44:30.142083] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:12.684 [2024-11-20 07:44:30.143303] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:30:12.684 [2024-11-20 07:44:30.143359] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.684 [2024-11-20 07:44:30.246415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:12.684 [2024-11-20 07:44:30.298253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.684 [2024-11-20 07:44:30.298304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.684 [2024-11-20 07:44:30.298313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.684 [2024-11-20 07:44:30.298326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.684 [2024-11-20 07:44:30.298332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.684 [2024-11-20 07:44:30.300203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.684 [2024-11-20 07:44:30.300369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.684 [2024-11-20 07:44:30.300370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:12.684 [2024-11-20 07:44:30.378662] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:12.684 [2024-11-20 07:44:30.379781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:12.684 [2024-11-20 07:44:30.380268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:12.684 [2024-11-20 07:44:30.380399] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:30:12.945 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:12.945 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:30:12.945 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:12.945 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:12.945 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:12.945 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.945 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:12.945 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:13.205 [2024-11-20 07:44:31.173275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.205 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:13.466 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:13.466 [2024-11-20 07:44:31.581533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.466 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:13.727 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:13.988 Malloc0 00:30:13.988 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:13.988 Delay0 00:30:13.988 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.249 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:14.509 NULL1 00:30:14.509 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:30:14.770 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:14.770 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3601704 00:30:14.770 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:14.770 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.030 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.030 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:15.030 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:15.290 true 00:30:15.290 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:15.291 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.552 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.813 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:15.814 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:15.814 true 00:30:15.814 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:15.814 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.075 07:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.336 07:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:16.336 07:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:16.598 true 00:30:16.598 07:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:16.598 07:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.859 07:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.859 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:16.859 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:17.120 true 00:30:17.120 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:17.120 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.380 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.640 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:17.640 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:17.640 true 00:30:17.640 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:17.640 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.900 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.159 07:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:18.159 07:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:18.159 true 00:30:18.159 07:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:18.159 07:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.433 07:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.748 07:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:18.748 07:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:18.748 true 00:30:18.748 07:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:18.748 07:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.017 07:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.278 07:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:19.278 07:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:19.278 true 00:30:19.278 07:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:19.278 07:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.540 07:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.801 07:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:19.801 07:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:19.801 true 00:30:19.801 07:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:19.801 07:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.061 07:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.323 07:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:20.323 07:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:20.323 true 00:30:20.584 07:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 3601704 00:30:20.584 07:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.584 07:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.845 07:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:20.845 07:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:21.106 true 00:30:21.106 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:21.106 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.106 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.367 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:21.367 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:21.627 true 00:30:21.627 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:21.627 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.627 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.889 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:21.889 07:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:22.150 true 00:30:22.150 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:22.150 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.412 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.412 07:44:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:22.412 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:22.673 true 00:30:22.673 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:22.673 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.934 07:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.934 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:22.934 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:23.195 true 00:30:23.195 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:23.195 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.456 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.456 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:23.456 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:23.717 true 00:30:23.717 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:23.717 07:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.977 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.239 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:24.239 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:24.239 true 00:30:24.239 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:24.239 07:44:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.500 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.762 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:24.762 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:24.762 true 00:30:24.762 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:24.762 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.023 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.285 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:25.285 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:25.285 true 00:30:25.285 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:25.285 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.546 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.806 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:25.806 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:25.806 true 00:30:25.806 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:25.806 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.067 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.327 07:44:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:26.327 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:26.327 true 00:30:26.589 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:26.589 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.589 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.850 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:26.850 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:27.111 true 00:30:27.111 07:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:27.111 07:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.111 07:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.372 07:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:27.372 07:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:27.633 true 00:30:27.633 07:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:27.633 07:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.633 07:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.894 07:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:27.894 07:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:28.155 true 00:30:28.155 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:28.155 07:44:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.155 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.415 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:28.416 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:28.677 true 00:30:28.677 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:28.677 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.677 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.938 07:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:28.938 07:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:29.198 true 00:30:29.198 07:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:29.198 07:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.198 07:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.459 07:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:29.459 07:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:29.720 true 00:30:29.720 07:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:29.720 07:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.982 07:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.982 07:44:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:29.982 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:30.243 true 00:30:30.243 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:30.243 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.505 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.505 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:30.505 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:30.765 true 00:30:30.765 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:30.765 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.026 07:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.286 07:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:31.286 07:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:31.286 true 00:30:31.286 07:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:31.286 07:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.547 07:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.808 07:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:31.808 07:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:31.808 true 00:30:31.808 07:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:31.808 07:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.069 07:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.330 07:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:32.330 07:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:32.330 true 00:30:32.330 07:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:32.330 07:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.590 07:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.851 07:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:32.851 07:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:32.851 true 00:30:33.111 07:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:33.112 07:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.112 07:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.373 07:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:33.373 07:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:33.373 true 00:30:33.634 07:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:33.634 07:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.634 07:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.895 07:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:33.895 07:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:34.156 true 00:30:34.156 07:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:34.156 07:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.156 07:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.418 07:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:34.418 07:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:34.679 true 00:30:34.679 07:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:34.679 07:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.680 07:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.940 07:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:34.940 07:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:34.940 true 00:30:35.201 07:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:35.201 07:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.201 07:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.461 07:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:35.461 07:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:35.722 true 00:30:35.722 07:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:35.722 07:44:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.722 07:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.983 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:35.983 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:36.245 true 00:30:36.245 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:36.245 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.245 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.506 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:36.506 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:36.767 true 00:30:36.767 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:36.767 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.767 07:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.029 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:37.029 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:37.290 true 00:30:37.290 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:37.290 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.550 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.550 07:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:37.550 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:37.811 true 00:30:37.811 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:37.811 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.071 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.071 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:38.071 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:38.332 true 00:30:38.332 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:38.332 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.592 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.592 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:38.592 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:38.853 true 00:30:38.853 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:38.853 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.113 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.373 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:39.373 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:39.373 true 00:30:39.373 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:39.373 07:44:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.634 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.894 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:39.894 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:39.894 true 00:30:39.894 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:39.894 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.154 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.415 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:40.415 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:40.415 true 00:30:40.676 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:40.676 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.676 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.936 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:40.936 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:30:41.203 true 00:30:41.203 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:41.203 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.203 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.464 07:44:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:41.464 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:41.724 true 00:30:41.724 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:41.724 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.724 07:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.984 07:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:41.984 07:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:30:42.243 true 00:30:42.243 07:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:42.243 07:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.243 07:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.503 07:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:42.503 07:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:42.764 true 00:30:42.764 07:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:42.764 07:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.025 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.025 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:43.025 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:43.286 true 00:30:43.286 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704 00:30:43.286 07:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:43.546 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:43.806 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:30:43.806 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:30:43.806 true
00:30:43.806 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704
00:30:43.807 07:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:44.067 07:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:44.327 07:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:30:44.327 07:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:30:44.327 true
00:30:44.327 07:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704
00:30:44.327 07:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:44.586 07:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:44.846 07:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:30:44.846 07:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:30:44.846 true
00:30:45.106 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704
00:30:45.106 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:45.106 Initializing NVMe Controllers
00:30:45.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:45.106 Controller IO queue size 128, less than required.
00:30:45.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:45.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:45.106 Initialization complete. Launching workers.
00:30:45.106 ========================================================
00:30:45.106 Latency(us)
00:30:45.106 Device Information : IOPS MiB/s Average min max
00:30:45.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30390.18 14.84 4212.02 1136.33 11188.47
00:30:45.106 ========================================================
00:30:45.106 Total : 30390.18 14.84 4212.02 1136.33 11188.47
00:30:45.106
00:30:45.106 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:45.366 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:30:45.366 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:30:45.366 true
00:30:45.625 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3601704
00:30:45.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3601704) - No such process
00:30:45.625 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3601704
00:30:45.626 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:45.626 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:45.885 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:45.885 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:45.885 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:45.885 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:45.885 07:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:30:45.885 null0
00:30:45.885 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:45.885 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:45.885 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:30:46.144 null1
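
[Note on the trace above] The 30-second perf run launched at sh@40 has now exited. Its summary is self-consistent: 30390.18 IOPS at the 512-byte IO size given on the command line works out to 30390.18 x 512 / 1048576, which is the reported 14.84 MiB/s. Once the perf process is gone, the next kill -0 probe at sh@44 fails with "No such process" and the single-namespace phase ends. As an aid to following the sh@NN tags, the phase behaves like the sketch below; this is a reconstruction from the logged commands, not the verbatim ns_hotplug_stress.sh, and the null_size starting value is inferred from the first logged null_size=1001.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # sh@40/sh@42: start a 30 s randread load (queue depth 128, 512 B IOs)
    # against the TCP target and remember its PID.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID"; do                   # sh@44: loop while perf is alive
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1    # sh@45: hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0  # sh@46: hot-add the Delay0 bdev back
        null_size=$((null_size + 1))                # sh@49: 1001, 1002, ...
        "$rpc" bdev_null_resize NULL1 "$null_size"  # sh@50: prints the "true" lines above
    done                                            # the failing probe prints "No such process"
    wait "$PERF_PID"                                # sh@53: reap the finished perf process

The design point the log exercises is that namespace hot-remove, hot-add, and bdev resize keep racing against live randread I/O for the whole perf run, and the test passes as long as every RPC keeps returning success.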
00:30:46.144 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:46.144 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:46.144 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:46.404 null2 00:30:46.404 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:46.404 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:46.404 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:46.663 null3 00:30:46.663 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:46.663 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:46.663 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:46.663 null4 00:30:46.663 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:46.663 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:46.663 07:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:46.923 null5 00:30:46.923 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:46.923 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:46.923 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:47.182 null6 00:30:47.182 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:47.182 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.182 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:47.182 null7 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:47.443 
07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
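[editor's note] The @14-@18 markers trace the worker function itself: each worker repeatedly attaches its null bdev to subsystem nqn.2016-06.io.spdk:cnode1 under a fixed namespace ID and hot-removes it again, ten times. Reconstructed from the trace (the names add_remove, nsid, and bdev are taken from the logged lines; the loop shape is inferred from the "(( i = 0 ))" / "(( i < 10 ))" / "(( ++i ))" entries):

    # Reconstruction (assumed) of the worker traced at ns_hotplug_stress.sh@14-@18.
    add_remove() {
        local nsid=$1 bdev=$2              # @14: e.g. "add_remove 1 null0" -> nsid=1, bdev=null0
        for (( i = 0; i < 10; ++i )); do   # @16
            # @17: attach the bdev as namespace $nsid of cnode1 ...
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # @18: ... then hot-remove that same namespace.
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }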
00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:47.443 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
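[editor's note] Because the eight workers run concurrently, one logical iteration is scattered across the trace. Worker 6, for instance, contributes the serialized pair

    $rpc_py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6

repeated ten times, interleaved arbitrarily with the other workers' pairs for namespaces 1-8. That interleaving is the point of the stress test: the target must tolerate any ordering of concurrent namespace add/remove operations on the same subsystem, so the out-of-order entries below indicate scheduling, not failure.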
00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3608004 3608006 3608007 3608009 3608011 3608013 3608015 3608016 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:47.444 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:47.705 07:45:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.705 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:47.965 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:47.965 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:47.965 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.965 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:47.965 07:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:47.965 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:47.965 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:47.966 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:47.966 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.966 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.966 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:48.226 07:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:48.226 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:48.227 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:48.227 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:48.227 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:48.227 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:48.227 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:48.487 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:48.747 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:48.747 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:48.747 07:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:48.747 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:48.747 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:48.747 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:48.747 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.747 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.747 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.747 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:48.747 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.747 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.748 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:48.748 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.748 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.748 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:48.748 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.748 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.748 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:49.008 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.008 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:30:49.008 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:49.008 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.008 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.008 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:49.008 07:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:49.008 
07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.008 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.268 07:45:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:49.268 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:49.531 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:49.531 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:49.531 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.531 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.531 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:49.531 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.531 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:49.531 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.531 07:45:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.531 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:49.531 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.532 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.532 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:49.532 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.532 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.532 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:49.532 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.532 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.532 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:49.532 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.532 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.532 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.793 07:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.055 07:45:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.055 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.055 
07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.316 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:50.577 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.838 
07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.838 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:50.838 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:50.838 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.099 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:51.361 rmmod nvme_tcp 00:30:51.361 rmmod nvme_fabrics 00:30:51.361 rmmod nvme_keyring 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3601094 ']' 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3601094 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3601094 ']' 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3601094 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3601094
00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3601094'
00:30:51.361 killing process with pid 3601094
00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3601094
00:30:51.361 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3601094
00:30:51.621 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:51.621 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:51.621 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:51.622 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:30:51.622 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:30:51.622 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:51.622 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:30:51.622 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:51.622 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:51.622 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:51.622 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:51.622 07:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:53.538 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:53.538
00:30:53.538 real 0m49.498s
00:30:53.538 user 3m4.014s
00:30:53.538 sys 0m22.254s
00:30:53.538 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:53.538 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:30:53.538 ************************************
00:30:53.538 END TEST nvmf_ns_hotplug_stress
00:30:53.538 ************************************
00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:53.801
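The churn traced in the loop above is the whole point of ns_hotplug_stress: namespaces are attached to and detached from a live subsystem while host I/O keeps running. A minimal re-creation of that loop, with the nsid-to-null-bdev pairing taken from the xtrace (the real ns_hotplug_stress.sh randomizes the schedule differently, so treat this as a sketch):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for ((i = 0; i < 10; i++)); do
    n=$((RANDOM % 8 + 1))          # nsid 1..8, backed by bdevs null0..null7
    if ((RANDOM % 2)); then        # coin flip: attach or detach
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true
    else
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" || true
    fi                             # '|| true': duplicate adds and removes of missing nsids fail by design
done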
07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:53.801 ************************************ 00:30:53.801 START TEST nvmf_delete_subsystem 00:30:53.801 ************************************ 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:53.801 * Looking for test storage... 00:30:53.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:53.801 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:53.801 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:53.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.801 --rc genhtml_branch_coverage=1 00:30:53.801 --rc genhtml_function_coverage=1 00:30:53.801 --rc genhtml_legend=1 00:30:53.801 --rc geninfo_all_blocks=1 00:30:53.801 --rc geninfo_unexecuted_blocks=1 00:30:53.801 00:30:53.801 ' 00:30:53.801 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:53.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.801 --rc genhtml_branch_coverage=1 00:30:53.801 --rc genhtml_function_coverage=1 00:30:53.801 --rc genhtml_legend=1 00:30:53.801 --rc geninfo_all_blocks=1 00:30:53.801 --rc geninfo_unexecuted_blocks=1 00:30:53.801 00:30:53.801 ' 00:30:53.801 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:53.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.801 --rc genhtml_branch_coverage=1 00:30:53.801 --rc genhtml_function_coverage=1 00:30:53.801 --rc genhtml_legend=1 00:30:53.801 --rc geninfo_all_blocks=1 00:30:53.801 --rc geninfo_unexecuted_blocks=1 00:30:53.801 00:30:53.801 ' 00:30:53.801 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:53.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.801 --rc genhtml_branch_coverage=1 00:30:53.801 --rc genhtml_function_coverage=1 00:30:53.801 --rc 
genhtml_legend=1 00:30:53.801 --rc geninfo_all_blocks=1 00:30:53.801 --rc geninfo_unexecuted_blocks=1 00:30:53.801 00:30:53.801 ' 00:30:53.801 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:53.801 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.064 07:45:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.064 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.065 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.065 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:54.065 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:54.065 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.065 07:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.209 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:02.209 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:02.209 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:02.209 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:02.209 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:02.209 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:02.210 07:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:02.210 07:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:02.210 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:02.210 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.210 07:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:02.210 Found net devices under 0000:31:00.0: cvl_0_0 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:02.210 Found net devices under 0000:31:00.1: cvl_0_1 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:02.210 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:02.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:02.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:31:02.211 00:31:02.211 --- 10.0.0.2 ping statistics --- 00:31:02.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.211 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:02.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:02.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:31:02.211 00:31:02.211 --- 10.0.0.1 ping statistics --- 00:31:02.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.211 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3613656 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3613656 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3613656 ']' 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
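The waitforlisten 3613656 call above is what separates launching nvmf_tgt from issuing the first RPC. A rough bash sketch of the gate it implements, under the assumption that the helper polls the RPC socket until the app answers (the real function in common/autotest_common.sh handles more failure modes; waitforlisten_sketch and its retry interval here are illustrative, not the actual code):

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do                  # mirrors max_retries=100 above
        kill -0 "$pid" 2> /dev/null || return 1      # target died before listening
        if "$rpc" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                 # RPC server is answering
        fi
        sleep 0.5
    done
    return 1                                         # never came up
}

waitforlisten_sketch 3613656 /var/tmp/spdk.sock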
00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable
00:31:02.211 07:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:02.211 [2024-11-20 07:45:19.526994] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:31:02.211 [2024-11-20 07:45:19.528041] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
00:31:02.211 [2024-11-20 07:45:19.528083] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:02.211 [2024-11-20 07:45:19.627871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:31:02.211 [2024-11-20 07:45:19.666427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:02.211 [2024-11-20 07:45:19.666463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:02.211 [2024-11-20 07:45:19.666471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:02.211 [2024-11-20 07:45:19.666478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:02.211 [2024-11-20 07:45:19.666484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:02.211 [2024-11-20 07:45:19.667712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:02.211 [2024-11-20 07:45:19.667715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:02.211 [2024-11-20 07:45:19.724451] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:31:02.211 [2024-11-20 07:45:19.724915] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:31:02.211 [2024-11-20 07:45:19.725270] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
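For orientation: the target that just reported its reactors runs inside a network namespace built a few entries earlier. Lightly condensed out of the nvmf/common.sh xtrace above (cvl_0_0/cvl_0_1 are this host's renamed e810 ports; the iptables comment tag is dropped here for brevity):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &       # two cores (0x3), interrupt mode

The two ping blocks above are the sanity check on this plumbing: 10.0.0.2 answers from inside the namespace and 10.0.0.1 from the root namespace, so the NVMe/TCP traffic on port 4420 really crosses the NIC pair.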
00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.211 [2024-11-20 07:45:20.388766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.211 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.473 [2024-11-20 07:45:20.421143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.473 NULL1 00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.473 07:45:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:02.473 Delay0
00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3613880
00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:31:02.473 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:31:02.473 [2024-11-20 07:45:20.519619] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
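Pulling the delete_subsystem.sh xtrace together: the test stacks a delay bdev on a null bdev so every I/O sits for about a second, fills the queue with perf, then deletes the subsystem out from under it. A condensed sketch (RPC arguments as traced above; rpc.py is called directly here where the script goes through its rpc_cmd wrapper, and spdk is just shorthand for the workspace path):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"
nqn=nqn.2016-06.io.spdk:cnode1
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512               # 1000 MiB backing, 512 B blocks
"$rpc" bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s added latency per I/O
"$rpc" nvmf_subsystem_add_ns "$nqn" Delay0
"$spdk/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &        # 128 I/Os in flight
sleep 2
"$rpc" nvmf_delete_subsystem "$nqn"                  # pull the subsystem mid-I/O

The 'completed with error (sct=0, sc=8)' flood that follows is the expected outcome, not a failure: with Delay0 holding each command, the 128-deep queue is still full when the subsystem goes away, so the outstanding commands complete with generic NVMe status 08h (Command Aborted due to SQ Deletion), while perf's 'starting I/O failed: -6' lines are new submissions being rejected once the queue pair is gone.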
00:31:04.388 07:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:04.388 07:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:04.388 07:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:04.649 [condensed: a long run of repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6", as in-flight I/O from spdk_nvme_perf was failed back during the subsystem delete]
00:31:04.650 [2024-11-20 07:45:22.773810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9f8800d4b0 is same with the state(6) to be set
00:31:04.650 [condensed: further repeated "Read/Write completed with error (sct=0, sc=8)" completions]
00:31:05.596 [2024-11-20 07:45:23.739845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9945e0 is same with the state(6) to be set
00:31:05.596 [2024-11-20 07:45:23.773408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9f8800d7e0 is same with the state(6) to be set
00:31:05.596 [2024-11-20 07:45:23.776083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9930e0 is same with the state(6) to be set
00:31:05.596 [2024-11-20 07:45:23.776369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9934a0 is same with the state(6) to be set
00:31:05.596 [condensed: further repeated "Read/Write completed with error (sct=0, sc=8)" completions interleaved with the recv-state errors above]
00:31:05.596 [2024-11-20 07:45:23.776496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9f8800d020 is same with the state(6) to be set
00:31:05.596 Initializing NVMe Controllers
00:31:05.596 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:05.596 Controller IO queue size 128, less than required.
00:31:05.596 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:05.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:05.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:05.596 Initialization complete. Launching workers.
00:31:05.596 ========================================================
00:31:05.596 Latency(us)
00:31:05.596 Device Information : IOPS MiB/s Average min max
00:31:05.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.65 0.09 896021.47 308.41 1009018.10
00:31:05.596 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 142.36 0.07 968548.37 262.31 1044840.92
00:31:05.596 ========================================================
00:31:05.596 Total : 332.00 0.16 927119.96 262.31 1044840.92
00:31:05.596
00:31:05.596 [2024-11-20 07:45:23.777133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9945e0 (9): Bad file descriptor
00:31:05.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:05.596 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:05.596 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:05.597 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3613880
00:31:05.597 07:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3613880
00:31:06.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3613880) - No such process
00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3613880
00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3613880
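Note: the trace above is the heart of the delete-while-busy check. delete_subsystem.sh tears down nqn.2016-06.io.spdk:cnode1 while spdk_nvme_perf still has queue-depth-128 I/O in flight, then polls the perf process until it dies and asserts, via NOT wait, that it exited with an error. A minimal sketch of that polling idiom, reconstructed from the delete_subsystem.sh@34-45 xtrace lines; $perf_pid is a stand-in for the literal PID 3613880 in the log:

    # Sketch, not the verbatim test script: poll the workload with kill -0
    # until it exits, capping the wait at roughly 30 * 0.5s.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1
        sleep 0.5
    done
    # Reaping the dead workload must report failure, since its I/O was cut
    # off mid-run; a zero exit status here would fail the test.
    if wait "$perf_pid" 2>/dev/null; then
        echo "expected spdk_nvme_perf to exit with an error" >&2
        exit 1
    fi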
00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3613880 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:06.168 [2024-11-20 07:45:24.313162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.168 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.169 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.169 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.169 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:06.169 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.169 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3614677 00:31:06.169 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:06.169 07:45:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3614677 00:31:06.169 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:06.169 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:06.429 [2024-11-20 07:45:24.412926] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:06.725 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:06.725 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3614677 00:31:06.725 07:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:07.368 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:07.368 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3614677 00:31:07.368 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:07.938 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:07.938 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3614677 00:31:07.938 07:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:08.198 07:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:08.198 07:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3614677 00:31:08.198 07:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:08.767 07:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:08.767 07:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3614677 00:31:08.767 07:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:09.338 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:09.338 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3614677 00:31:09.338 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:09.339 Initializing NVMe Controllers 00:31:09.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:09.339 Controller IO queue size 128, less than required. 
00:31:09.339 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:09.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:09.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:09.339 Initialization complete. Launching workers. 00:31:09.339 ======================================================== 00:31:09.339 Latency(us) 00:31:09.339 Device Information : IOPS MiB/s Average min max 00:31:09.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002614.76 1000311.41 1041748.41 00:31:09.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004542.56 1000159.61 1041766.21 00:31:09.339 ======================================================== 00:31:09.339 Total : 256.00 0.12 1003578.66 1000159.61 1041766.21 00:31:09.339 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3614677 00:31:09.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3614677) - No such process 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3614677 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:09.910 rmmod nvme_tcp 00:31:09.910 rmmod nvme_fabrics 00:31:09.910 rmmod nvme_keyring 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3613656 ']' 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3613656 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3613656 ']' 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 
3613656 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:09.910 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3613656 00:31:09.910 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:09.910 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:09.910 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3613656' 00:31:09.910 killing process with pid 3613656 00:31:09.910 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3613656 00:31:09.910 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3613656 00:31:09.910 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:09.910 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:09.910 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:09.910 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:09.910 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:09.910 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:09.910 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:10.171 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:10.171 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:10.171 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.171 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.171 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.083 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:12.083 00:31:12.083 real 0m18.391s 00:31:12.083 user 0m26.669s 00:31:12.083 sys 0m7.525s 00:31:12.083 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:12.083 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:12.083 ************************************ 00:31:12.083 END TEST nvmf_delete_subsystem 00:31:12.083 ************************************ 00:31:12.083 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:12.083 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:12.083 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:12.083 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:12.083 ************************************ 00:31:12.083 START TEST nvmf_host_management 00:31:12.083 ************************************ 00:31:12.083 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:12.344 * Looking for test storage... 00:31:12.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:12.344 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:12.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.344 --rc genhtml_branch_coverage=1 00:31:12.344 --rc genhtml_function_coverage=1 00:31:12.344 --rc genhtml_legend=1 00:31:12.344 --rc geninfo_all_blocks=1 00:31:12.345 --rc geninfo_unexecuted_blocks=1 00:31:12.345 00:31:12.345 ' 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:12.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.345 --rc genhtml_branch_coverage=1 00:31:12.345 --rc genhtml_function_coverage=1 00:31:12.345 --rc genhtml_legend=1 00:31:12.345 --rc geninfo_all_blocks=1 00:31:12.345 --rc geninfo_unexecuted_blocks=1 00:31:12.345 00:31:12.345 ' 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:12.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.345 --rc genhtml_branch_coverage=1 00:31:12.345 --rc genhtml_function_coverage=1 00:31:12.345 --rc genhtml_legend=1 00:31:12.345 --rc geninfo_all_blocks=1 00:31:12.345 --rc geninfo_unexecuted_blocks=1 00:31:12.345 00:31:12.345 ' 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:12.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:12.345 --rc genhtml_branch_coverage=1 00:31:12.345 --rc genhtml_function_coverage=1 00:31:12.345 --rc genhtml_legend=1 
00:31:12.345 --rc geninfo_all_blocks=1 00:31:12.345 --rc geninfo_unexecuted_blocks=1 00:31:12.345 00:31:12.345 ' 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:12.345 07:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:12.345 07:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.488 07:45:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:20.488 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:20.488 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:20.488 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
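Note: the xtrace above is nvmf/common.sh discovering the physical NICs that back this run: it walks the known PCI ids, keeps the two Intel e810 ports (0x8086 - 0x159b), and resolves each PCI address to its kernel net device through sysfs. A compressed sketch of that mapping step (nvmf/common.sh@410-429), assuming the ports are bound to the ice driver so sysfs exposes a net/ directory:

    # Map each candidate PCI address to its net device name,
    # e.g. 0000:31:00.0 -> cvl_0_0 and 0000:31:00.1 -> cvl_0_1.
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        [[ -d /sys/bus/pci/devices/$pci/net ]] || continue   # no driver bound
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        net_devs+=("${pci_net_devs[@]##*/}")                 # keep leaf names only
    done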
00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:20.489 Found net devices under 0000:31:00.0: cvl_0_0 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:20.489 Found net devices under 0000:31:00.1: cvl_0_1 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.489 07:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:31:20.489 00:31:20.489 --- 10.0.0.2 ping statistics --- 00:31:20.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.489 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:31:20.489 00:31:20.489 --- 10.0.0.1 ping statistics --- 00:31:20.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.489 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3619398 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3619398 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3619398 ']' 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:20.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:20.489 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:20.489 [2024-11-20 07:45:38.145528] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:20.489 [2024-11-20 07:45:38.146662] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:31:20.489 [2024-11-20 07:45:38.146709] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.489 [2024-11-20 07:45:38.247400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:20.489 [2024-11-20 07:45:38.301150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.489 [2024-11-20 07:45:38.301195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.489 [2024-11-20 07:45:38.301204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.489 [2024-11-20 07:45:38.301211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.489 [2024-11-20 07:45:38.301218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.489 [2024-11-20 07:45:38.303575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.489 [2024-11-20 07:45:38.303736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.489 [2024-11-20 07:45:38.303870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:20.489 [2024-11-20 07:45:38.304004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.489 [2024-11-20 07:45:38.385009] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:20.489 [2024-11-20 07:45:38.385819] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:20.490 [2024-11-20 07:45:38.386254] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:20.490 [2024-11-20 07:45:38.386781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:20.490 [2024-11-20 07:45:38.386838] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
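The block above is nvmf_tcp_init doing environment setup rather than testing: the two E810 ports found under 0000:31:00.0/.1 (cvl_0_0, cvl_0_1) are split across network namespaces so a single host can act as both NVMe/TCP target and initiator. A condensed sketch of the traced commands, assuming root privileges and the interface names and 10.0.0.0/24 addressing of this run:

    # nvmf_tcp_init, condensed from the trace above (nvmf/common.sh@250-291).
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Open NVMe/TCP (port 4420) from the initiator-facing port; the comment tag
    # lets the later iptables-save | grep -v SPDK_NVMF cleanup strip the rule.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                        # root ns -> target port
    ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> initiator port
    # Target app runs inside the namespace (path relative to the SPDK tree);
    # -m 0x1E pins reactors to cores 1-4 and --interrupt-mode makes them sleep
    # on file descriptors instead of busy-polling.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &

The "Set spdk_thread (...) to intr mode" notices that follow are the poll groups confirming that switch; the harness then blocks in waitforlisten until /var/tmp/spdk.sock appears.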
00:31:20.751 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:20.751 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:20.751 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:20.751 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:20.751 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.011 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.011 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:21.011 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.011 07:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.011 [2024-11-20 07:45:39.001140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.011 Malloc0 00:31:21.011 [2024-11-20 07:45:39.105325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3619765 00:31:21.011 07:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3619765 /var/tmp/bdevperf.sock 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3619765 ']' 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:21.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:21.011 { 00:31:21.011 "params": { 00:31:21.011 "name": "Nvme$subsystem", 00:31:21.011 "trtype": "$TEST_TRANSPORT", 00:31:21.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.011 "adrfam": "ipv4", 00:31:21.011 "trsvcid": "$NVMF_PORT", 00:31:21.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.011 "hdgst": ${hdgst:-false}, 00:31:21.011 "ddgst": ${ddgst:-false} 00:31:21.011 }, 00:31:21.011 "method": "bdev_nvme_attach_controller" 00:31:21.011 } 00:31:21.011 EOF 00:31:21.011 )") 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:21.011 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:21.011 "params": { 00:31:21.011 "name": "Nvme0", 00:31:21.011 "trtype": "tcp", 00:31:21.011 "traddr": "10.0.0.2", 00:31:21.011 "adrfam": "ipv4", 00:31:21.011 "trsvcid": "4420", 00:31:21.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:21.011 "hdgst": false, 00:31:21.011 "ddgst": false 00:31:21.011 }, 00:31:21.011 "method": "bdev_nvme_attach_controller" 00:31:21.011 }' 00:31:21.011 [2024-11-20 07:45:39.212663] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:31:21.011 [2024-11-20 07:45:39.212733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3619765 ] 00:31:21.271 [2024-11-20 07:45:39.307143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.271 [2024-11-20 07:45:39.360210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.531 Running I/O for 10 seconds... 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=462 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 462 -ge 100 ']' 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.105 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.105 [2024-11-20 07:45:40.110401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 
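For the I/O side, host_management.sh@72 starts bdevperf without writing a config file: gen_nvmf_target_json expands the heredoc template shown above into one bdev_nvme_attach_controller entry per requested subsystem (digests default to false), validates it with jq, and hands the result to bdevperf on an anonymous pipe, /dev/fd/63 in this trace. A sketch of the equivalent invocation, with bash process substitution standing in for the harness plumbing and gen_nvmf_target_json assumed sourced from nvmf/common.sh:

    # First bdevperf run: a 64-deep, 64 KiB verify workload for 10 seconds
    # against the JSON-described controller at 10.0.0.2:4420 (cnode0/host0).
    ./build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10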
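Before injecting any fault, the script proves traffic is actually flowing: waitforio (host_management.sh@45-64) polls bdevperf's RPC socket for the bdev's read counter and only proceeds once at least 100 reads have completed; here the first sample already shows 462. A minimal re-creation, assuming rpc_cmd resolves to scripts/rpc.py as in the SPDK test harness:

    # waitforio, re-created from the traced line numbers; the sleep between
    # samples is an assumption (the trace does not show the delay).
    sock=/var/tmp/bdevperf.sock
    ret=1
    for ((i = 10; i != 0; i--)); do
        reads=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b Nvme0n1 |
                jq -r '.bdevs[0].num_read_ops')
        if [ "$reads" -ge 100 ]; then
            ret=0       # enough verified I/O observed; safe to inject the fault
            break
        fi
        sleep 0.25
    done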
00:31:22.105 [2024-11-20 07:45:40.110552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.105 [2024-11-20 07:45:40.110776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.106 [2024-11-20 07:45:40.110783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.106 [2024-11-20 07:45:40.110790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.106 [2024-11-20 07:45:40.110798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.106 [2024-11-20 07:45:40.110805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.106 [2024-11-20 07:45:40.110812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.106 [2024-11-20 07:45:40.110819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.106 [2024-11-20 07:45:40.110827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.106 [2024-11-20 07:45:40.110834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.106 [2024-11-20 07:45:40.110844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.106 [2024-11-20 07:45:40.110851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771a80 is same with the state(6) to be set 00:31:22.106 [2024-11-20 07:45:40.111047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:22.106 [2024-11-20 07:45:40.111145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 
07:45:40.111340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111517] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.106 [2024-11-20 07:45:40.111721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.106 [2024-11-20 07:45:40.111732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.111982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.111992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.107 [2024-11-20 07:45:40.112271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.107 [2024-11-20 07:45:40.112280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e8c60 is same with the state(6) to be set 00:31:22.107 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.107 [2024-11-20 07:45:40.113598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:22.107 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:22.107 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.107 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.107 task offset: 66816 on job bdev=Nvme0n1 fails 00:31:22.107 00:31:22.107 Latency(us) 00:31:22.107 [2024-11-20T06:45:40.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.107 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:22.107 Job: Nvme0n1 ended in about 0.42 seconds with error 00:31:22.107 Verification LBA range: start 0x0 length 0x400 00:31:22.107 Nvme0n1 : 0.42 1240.71 77.54 152.12 0.00 44591.48 4041.39 36918.61 00:31:22.107 [2024-11-20T06:45:40.317Z] =================================================================================================================== 00:31:22.107 [2024-11-20T06:45:40.317Z] Total : 1240.71 77.54 152.12 0.00 44591.48 4041.39 36918.61 00:31:22.107 [2024-11-20 07:45:40.115896] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:22.107 [2024-11-20 07:45:40.115941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d8280 (9): Bad file descriptor 00:31:22.107 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.107 07:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:22.107 [2024-11-20 07:45:40.208899] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
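The wall of ABORTED - SQ DELETION completions above is the expected outcome, not a malfunction: host_management.sh@84 revokes host0's access to cnode0 while the verify workload is in flight, so the target tears down the submission queues and every queued command completes with that status. bdevperf stops on the error (spdk_app_stop'd on non-zero) and bdev_nvme starts a controller reset, which succeeds once @85 restores the host. The latency table records the failing window: roughly 1240 IOPS with about 152 failed I/O per second over the 0.42 s the job survived. Stripped of the rpc_cmd wrapper, the injection is just two RPCs against the target's default socket:

    # Fault-injection pair from host_management.sh@84/@85; NQNs as in this run.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # ...in-flight I/O drains as ABORTED - SQ DELETION (00/08)...
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1    # host_management.sh@87: give the initiator time to reconnect

The kill -9 that follows finding no such process is likewise tolerated (the script falls through to true), since bdevperf had already exited on the induced error; the second, one-second bdevperf run below then confirms recovery, completing at 1978.67 IOPS with zero failures.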
00:31:23.050 07:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3619765 00:31:23.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3619765) - No such process 00:31:23.050 07:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:23.050 07:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:23.050 07:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:23.050 07:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:23.050 07:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:23.050 07:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:23.050 07:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:23.050 07:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:23.050 { 00:31:23.050 "params": { 00:31:23.050 "name": "Nvme$subsystem", 00:31:23.050 "trtype": "$TEST_TRANSPORT", 00:31:23.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.050 "adrfam": "ipv4", 00:31:23.050 "trsvcid": "$NVMF_PORT", 00:31:23.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.050 "hdgst": ${hdgst:-false}, 00:31:23.050 "ddgst": ${ddgst:-false} 00:31:23.050 }, 00:31:23.050 "method": "bdev_nvme_attach_controller" 00:31:23.050 } 00:31:23.050 EOF 00:31:23.050 )") 00:31:23.050 07:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:23.050 07:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:23.050 07:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:23.050 07:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:23.050 "params": { 00:31:23.050 "name": "Nvme0", 00:31:23.050 "trtype": "tcp", 00:31:23.050 "traddr": "10.0.0.2", 00:31:23.050 "adrfam": "ipv4", 00:31:23.050 "trsvcid": "4420", 00:31:23.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:23.050 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:23.050 "hdgst": false, 00:31:23.050 "ddgst": false 00:31:23.050 }, 00:31:23.050 "method": "bdev_nvme_attach_controller" 00:31:23.050 }' 00:31:23.050 [2024-11-20 07:45:41.183821] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:31:23.050 [2024-11-20 07:45:41.183899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3620116 ]
00:31:23.310 [2024-11-20 07:45:41.279050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:23.310 [2024-11-20 07:45:41.331665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:23.570 Running I/O for 1 seconds...
00:31:24.513 2017.00 IOPS, 126.06 MiB/s
00:31:24.513
00:31:24.513 Latency(us)
00:31:24.513 [2024-11-20T06:45:42.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:24.513 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:24.513 Verification LBA range: start 0x0 length 0x400
00:31:24.513 Nvme0n1 : 1.05 1978.67 123.67 0.00 0.00 30488.30 3713.71 44782.93
00:31:24.513 [2024-11-20T06:45:42.723Z] ===================================================================================================================
00:31:24.513 [2024-11-20T06:45:42.723Z] Total : 1978.67 123.67 0.00 0.00 30488.30 3713.71 44782.93
00:31:24.513 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:31:24.513 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:31:24.513 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:31:24.513 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:24.513 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:31:24.513 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:24.513 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:31:24.513 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:24.513 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:31:24.513 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:24.513 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:24.774 rmmod nvme_tcp
00:31:24.774 rmmod nvme_fabrics
00:31:24.774 rmmod nvme_keyring
00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3619398 ']'
00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3619398
00:31:24.774 07:45:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3619398 ']' 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3619398 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3619398 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3619398' 00:31:24.774 killing process with pid 3619398 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3619398 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3619398 00:31:24.774 [2024-11-20 07:45:42.926466] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.774 07:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:27.322 00:31:27.322 real 0m14.759s 00:31:27.322 user 
0m19.581s 00:31:27.322 sys 0m7.463s 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:27.322 ************************************ 00:31:27.322 END TEST nvmf_host_management 00:31:27.322 ************************************ 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:27.322 ************************************ 00:31:27.322 START TEST nvmf_lvol 00:31:27.322 ************************************ 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:27.322 * Looking for test storage... 00:31:27.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:27.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.322 --rc genhtml_branch_coverage=1 00:31:27.322 --rc genhtml_function_coverage=1 00:31:27.322 --rc genhtml_legend=1 00:31:27.322 --rc geninfo_all_blocks=1 00:31:27.322 --rc geninfo_unexecuted_blocks=1 00:31:27.322 00:31:27.322 ' 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:27.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.322 --rc genhtml_branch_coverage=1 00:31:27.322 --rc genhtml_function_coverage=1 00:31:27.322 --rc genhtml_legend=1 00:31:27.322 --rc geninfo_all_blocks=1 00:31:27.322 --rc geninfo_unexecuted_blocks=1 00:31:27.322 00:31:27.322 ' 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:27.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.322 --rc genhtml_branch_coverage=1 00:31:27.322 --rc genhtml_function_coverage=1 00:31:27.322 --rc genhtml_legend=1 00:31:27.322 --rc geninfo_all_blocks=1 00:31:27.322 --rc geninfo_unexecuted_blocks=1 00:31:27.322 00:31:27.322 ' 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:27.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.322 --rc genhtml_branch_coverage=1 00:31:27.322 --rc genhtml_function_coverage=1 
00:31:27.322 --rc genhtml_legend=1 00:31:27.322 --rc geninfo_all_blocks=1 00:31:27.322 --rc geninfo_unexecuted_blocks=1 00:31:27.322 00:31:27.322 ' 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.322 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.323 07:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:27.323 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.470 07:45:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:35.470 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:35.470 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:35.470 Found net devices under 0000:31:00.0: cvl_0_0 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:35.470 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:35.471 Found net devices under 0000:31:00.1: cvl_0_1 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.471 
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:35.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:31:35.471 00:31:35.471 --- 10.0.0.2 ping statistics --- 00:31:35.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.471 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:35.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:31:35.471 00:31:35.471 --- 10.0.0.1 ping statistics --- 00:31:35.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.471 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:35.471 07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:35.471 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:35.471 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:35.471 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:35.471 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:35.471 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3624567 00:31:35.471 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3624567 00:31:35.471 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:35.471 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3624567 ']' 00:31:35.471 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.471 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:35.471 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.471 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:35.471 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:35.471 [2024-11-20 07:45:53.074837] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
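For reference, the namespace plumbing nvmf_tcp_init traced a few entries above (through the two ping checks) reduces to the sequence below. Interface names and addresses are exactly the ones in the log: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the 10.0.0.2 target side, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side. A condensed sketch of those nvmf/common.sh steps:

# Sketch of the traced nvmf/common.sh@271-291 sequence.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # cross-namespace reachability check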
00:31:35.471 [2024-11-20 07:45:53.075999] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:31:35.471 [2024-11-20 07:45:53.076050] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.471 [2024-11-20 07:45:53.174808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:35.471 [2024-11-20 07:45:53.227845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.471 [2024-11-20 07:45:53.227917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.471 [2024-11-20 07:45:53.227926] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.471 [2024-11-20 07:45:53.227933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.471 [2024-11-20 07:45:53.227939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.471 [2024-11-20 07:45:53.229798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.471 [2024-11-20 07:45:53.229905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.471 [2024-11-20 07:45:53.229905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.471 [2024-11-20 07:45:53.308053] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:35.471 [2024-11-20 07:45:53.309116] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:35.471 [2024-11-20 07:45:53.309603] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:35.471 [2024-11-20 07:45:53.309766] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
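The three reactors and the intr-mode thread notices above come from launching nvmf_tgt with --interrupt-mode inside the target namespace, as traced at nvmf/common.sh@508. A hedged sketch of the same launch; waitforlisten is approximated here with a plain RPC probe rather than the helper's actual implementation:

# Sketch: interrupt-mode target on cores 0-2 (-m 0x7), matching the
# -i 0 -e 0xFFFF --interrupt-mode flags in the traced command line.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done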
00:31:35.733 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:35.733 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:31:35.733 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:35.733 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:35.733 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:35.733 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.733 07:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:35.993 [2024-11-20 07:45:54.102963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.993 07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:36.254 07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:36.254 07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:36.515 07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:36.515 07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:36.775 07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:36.775 07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ee3cd92f-c2bc-4297-8943-036b42c106e4 00:31:36.776 07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ee3cd92f-c2bc-4297-8943-036b42c106e4 lvol 20 00:31:37.036 07:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fdd57a26-bc5d-4157-be15-78bf63a96fc5 00:31:37.036 07:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:37.298 07:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fdd57a26-bc5d-4157-be15-78bf63a96fc5 00:31:37.559 07:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:37.559 [2024-11-20 07:45:55.686877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:37.559 07:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:37.819 07:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3625182 00:31:37.819 07:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:37.819 07:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:38.762 07:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fdd57a26-bc5d-4157-be15-78bf63a96fc5 MY_SNAPSHOT 00:31:39.022 07:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e1720517-f88b-4e7e-a172-1e7a1b6f4cf9 00:31:39.022 07:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fdd57a26-bc5d-4157-be15-78bf63a96fc5 30 00:31:39.283 07:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e1720517-f88b-4e7e-a172-1e7a1b6f4cf9 MY_CLONE 00:31:39.544 07:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=29cb6e86-274e-4f19-95dd-e110fe6d8dd7 00:31:39.544 07:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 29cb6e86-274e-4f19-95dd-e110fe6d8dd7 00:31:40.116 07:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3625182 00:31:48.255 Initializing NVMe Controllers 00:31:48.255 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:48.255 Controller IO queue size 128, less than required. 00:31:48.255 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:48.255 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:48.255 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:48.255 Initialization complete. Launching workers. 
00:31:48.255 ========================================================
00:31:48.255 Latency(us)
00:31:48.255 Device Information : IOPS MiB/s Average min max
00:31:48.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15366.64 60.03 8332.54 1761.41 62915.14
00:31:48.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15117.74 59.05 8466.44 4071.72 78538.95
00:31:48.255 ========================================================
00:31:48.255 Total : 30484.38 119.08 8398.94 1761.41 78538.95
00:31:48.255
00:31:48.255 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:48.515 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fdd57a26-bc5d-4157-be15-78bf63a96fc5
00:31:48.515 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ee3cd92f-c2bc-4297-8943-036b42c106e4
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:48.776 rmmod nvme_tcp
00:31:48.776 rmmod nvme_fabrics
00:31:48.776 rmmod nvme_keyring
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3624567 ']'
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3624567
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3624567 ']'
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3624567
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3624567 00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3624567' 00:31:48.776 killing process with pid 3624567 00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3624567 00:31:48.776 07:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3624567 00:31:49.038 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:49.038 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:49.038 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:49.038 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:49.038 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:49.038 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:49.038 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:49.038 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:49.038 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:49.038 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.038 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.038 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.951 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:50.951 00:31:50.951 real 0m24.027s 00:31:50.951 user 0m56.016s 00:31:50.951 sys 0m10.864s 00:31:50.952 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:50.952 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:50.952 ************************************ 00:31:50.952 END TEST nvmf_lvol 00:31:50.952 ************************************ 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:51.211 ************************************ 00:31:51.211 START TEST nvmf_lvs_grow 00:31:51.211 
************************************ 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:51.211 * Looking for test storage... 00:31:51.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:51.211 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:51.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.472 --rc genhtml_branch_coverage=1 00:31:51.472 --rc genhtml_function_coverage=1 00:31:51.472 --rc genhtml_legend=1 00:31:51.472 --rc geninfo_all_blocks=1 00:31:51.472 --rc geninfo_unexecuted_blocks=1 00:31:51.472 00:31:51.472 ' 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:51.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.472 --rc genhtml_branch_coverage=1 00:31:51.472 --rc genhtml_function_coverage=1 00:31:51.472 --rc genhtml_legend=1 00:31:51.472 --rc geninfo_all_blocks=1 00:31:51.472 --rc geninfo_unexecuted_blocks=1 00:31:51.472 00:31:51.472 ' 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:51.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.472 --rc genhtml_branch_coverage=1 00:31:51.472 --rc genhtml_function_coverage=1 00:31:51.472 --rc genhtml_legend=1 00:31:51.472 --rc geninfo_all_blocks=1 00:31:51.472 --rc geninfo_unexecuted_blocks=1 00:31:51.472 00:31:51.472 ' 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:51.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.472 --rc genhtml_branch_coverage=1 00:31:51.472 --rc genhtml_function_coverage=1 00:31:51.472 --rc genhtml_legend=1 00:31:51.472 --rc geninfo_all_blocks=1 00:31:51.472 --rc geninfo_unexecuted_blocks=1 00:31:51.472 00:31:51.472 ' 00:31:51.472 07:46:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [repeated verbatim prepends of the golangci/protoc/go bin directories collapsed; the same duplicated PATH string is re-set at paths/export.sh@3 and @4 and re-echoed at @6] 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
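
The common.sh trace here is assembling the target's argument vector before launch. A minimal bash sketch of that accumulation, using only the flags visible in the trace (the binary path below is a placeholder; the harness itself launches build/bin/nvmf_tgt from the workspace):

#!/usr/bin/env bash
# Sketch: how the harness builds up NVMF_APP before exec'ing the target.
NVMF_APP_SHM_ID=0                          # becomes "-i 0" on the command line
NVMF_APP=(./build/bin/nvmf_tgt)            # placeholder path, for illustration only

build_nvmf_app_args() {
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + full tracepoint mask (common.sh@29)
    NVMF_APP+=(--interrupt-mode)                  # appended just below in the trace (common.sh@34)
}

build_nvmf_app_args
echo "${NVMF_APP[@]} -m 0x1"               # the test adds the core mask when it starts the target
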
00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:51.472 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:51.473 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:51.473 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.614 07:46:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.614 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
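
The block that follows is gather_supported_nvmf_pci_devs sorting NICs into e810/x722/mlx buckets by PCI vendor:device id. A condensed sketch of that lookup pattern (the ids are copied from the trace; pci_bus_cache is stubbed here with the two 0x159b ports this run actually found, where the real script builds it by scanning /sys/bus/pci):

#!/usr/bin/env bash
intel=0x8086 mellanox=0x15b3
declare -A pci_bus_cache=( ["$intel:0x159b"]="0000:31:00.0 0000:31:00.1" )  # stub for illustration

e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})      # E810 id probed in the trace (no match here)
e810+=(${pci_bus_cache["$intel:0x159b"]})      # E810 id that matched on this machine
x722+=(${pci_bus_cache["$intel:0x37d2"]})      # X722 id
mlx+=(${pci_bus_cache["$mellanox:0x101d"]})    # one of the ConnectX ids probed in the trace

pci_devs=("${e810[@]}")                        # SPDK_TEST_NVMF_NICS=e810 keeps only E810 ports
echo "candidate ports: ${pci_devs[*]}"
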
00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:59.615 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:59.615 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:59.615 Found net devices under 0000:31:00.0: cvl_0_0 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:59.615 Found net devices under 0000:31:00.1: cvl_0_1 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.615 07:46:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:31:59.615 00:31:59.615 --- 10.0.0.2 ping statistics --- 00:31:59.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.615 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:59.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:31:59.615 00:31:59.615 --- 10.0.0.1 ping statistics --- 00:31:59.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.615 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:59.615 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.616 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:59.616 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:59.616 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3631508 00:31:59.616 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3631508 00:31:59.616 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:59.616 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3631508 ']' 00:31:59.616 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.616 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:59.616 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.616 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:59.616 07:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:59.616 [2024-11-20 07:46:17.028078] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
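
By this point the harness has split the two E810 ports between a target namespace and the default (initiator) namespace and verified reachability before starting nvmf_tgt inside the namespace. The plumbing, condensed from the exact commands in the trace (interface and namespace names as in the trace; needs root):

#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port; the comment tag lets teardown strip the rule again.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # default ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1             # target ns -> default ns
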
00:31:59.616 [2024-11-20 07:46:17.029222] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:31:59.616 [2024-11-20 07:46:17.029274] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.616 [2024-11-20 07:46:17.130329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.616 [2024-11-20 07:46:17.181424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.616 [2024-11-20 07:46:17.181473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.616 [2024-11-20 07:46:17.181482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.616 [2024-11-20 07:46:17.181489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.616 [2024-11-20 07:46:17.181496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.616 [2024-11-20 07:46:17.182327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.616 [2024-11-20 07:46:17.260113] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:59.616 [2024-11-20 07:46:17.260390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:59.877 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:59.877 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:31:59.877 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:59.877 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:59.877 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:59.877 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.877 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:59.877 [2024-11-20 07:46:18.043235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.877 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:59.877 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:59.877 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:59.877 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:00.138 ************************************ 00:32:00.138 START TEST lvs_grow_clean 00:32:00.138 ************************************ 00:32:00.138 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:32:00.138 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:00.138 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:00.138 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:00.138 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:00.138 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:00.138 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:00.138 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:00.138 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:00.138 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:00.399 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:00.399 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:00.400 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=073183ce-1979-4ae9-b5ec-c055c15f9214 00:32:00.400 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 073183ce-1979-4ae9-b5ec-c055c15f9214 00:32:00.400 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:00.661 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:00.661 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:00.661 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 073183ce-1979-4ae9-b5ec-c055c15f9214 lvol 150 00:32:00.923 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=230345cf-406a-4fd1-9a4d-bec46a38a7c8 00:32:00.923 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:00.923 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:00.923 [2024-11-20 07:46:19.078916] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:00.923 [2024-11-20 07:46:19.079075] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:00.923 true 00:32:00.923 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 073183ce-1979-4ae9-b5ec-c055c15f9214 00:32:00.923 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:01.182 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:01.182 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:01.443 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 230345cf-406a-4fd1-9a4d-bec46a38a7c8 00:32:01.704 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:01.704 [2024-11-20 07:46:19.803543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.704 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:01.965 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:01.965 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3631959 00:32:01.965 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:01.965 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3631959 /var/tmp/bdevperf.sock 00:32:01.965 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3631959 ']' 00:32:01.965 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:01.965 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:01.965 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:01.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:01.965 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:01.965 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:01.965 [2024-11-20 07:46:20.030381] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:32:01.965 [2024-11-20 07:46:20.030451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631959 ] 00:32:01.965 [2024-11-20 07:46:20.127198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.227 [2024-11-20 07:46:20.182019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.800 07:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:02.800 07:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:32:02.800 07:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:03.372 Nvme0n1 00:32:03.372 07:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:03.372 [ 00:32:03.372 { 00:32:03.372 "name": "Nvme0n1", 00:32:03.372 "aliases": [ 00:32:03.372 "230345cf-406a-4fd1-9a4d-bec46a38a7c8" 00:32:03.372 ], 00:32:03.372 "product_name": "NVMe disk", 00:32:03.372 "block_size": 4096, 00:32:03.372 "num_blocks": 38912, 00:32:03.372 "uuid": "230345cf-406a-4fd1-9a4d-bec46a38a7c8", 00:32:03.372 "numa_id": 0, 00:32:03.372 "assigned_rate_limits": { 00:32:03.372 "rw_ios_per_sec": 0, 00:32:03.372 "rw_mbytes_per_sec": 0, 00:32:03.372 "r_mbytes_per_sec": 0, 00:32:03.372 "w_mbytes_per_sec": 0 00:32:03.372 }, 00:32:03.372 "claimed": false, 00:32:03.372 "zoned": false, 00:32:03.372 "supported_io_types": { 00:32:03.372 "read": true, 00:32:03.372 "write": true, 00:32:03.372 "unmap": true, 00:32:03.372 "flush": true, 00:32:03.372 "reset": true, 00:32:03.372 "nvme_admin": true, 00:32:03.372 "nvme_io": true, 00:32:03.372 "nvme_io_md": false, 00:32:03.372 "write_zeroes": true, 00:32:03.372 "zcopy": false, 00:32:03.372 "get_zone_info": false, 00:32:03.372 "zone_management": false, 00:32:03.372 "zone_append": false, 00:32:03.372 "compare": true, 00:32:03.372 "compare_and_write": true, 00:32:03.372 "abort": true, 00:32:03.372 "seek_hole": false, 00:32:03.372 "seek_data": false, 00:32:03.372 "copy": true, 
00:32:03.372 "nvme_iov_md": false 00:32:03.372 }, 00:32:03.372 "memory_domains": [ 00:32:03.372 { 00:32:03.372 "dma_device_id": "system", 00:32:03.372 "dma_device_type": 1 00:32:03.372 } 00:32:03.373 ], 00:32:03.373 "driver_specific": { 00:32:03.373 "nvme": [ 00:32:03.373 { 00:32:03.373 "trid": { 00:32:03.373 "trtype": "TCP", 00:32:03.373 "adrfam": "IPv4", 00:32:03.373 "traddr": "10.0.0.2", 00:32:03.373 "trsvcid": "4420", 00:32:03.373 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:03.373 }, 00:32:03.373 "ctrlr_data": { 00:32:03.373 "cntlid": 1, 00:32:03.373 "vendor_id": "0x8086", 00:32:03.373 "model_number": "SPDK bdev Controller", 00:32:03.373 "serial_number": "SPDK0", 00:32:03.373 "firmware_revision": "25.01", 00:32:03.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:03.373 "oacs": { 00:32:03.373 "security": 0, 00:32:03.373 "format": 0, 00:32:03.373 "firmware": 0, 00:32:03.373 "ns_manage": 0 00:32:03.373 }, 00:32:03.373 "multi_ctrlr": true, 00:32:03.373 "ana_reporting": false 00:32:03.373 }, 00:32:03.373 "vs": { 00:32:03.373 "nvme_version": "1.3" 00:32:03.373 }, 00:32:03.373 "ns_data": { 00:32:03.373 "id": 1, 00:32:03.373 "can_share": true 00:32:03.373 } 00:32:03.373 } 00:32:03.373 ], 00:32:03.373 "mp_policy": "active_passive" 00:32:03.373 } 00:32:03.373 } 00:32:03.373 ] 00:32:03.373 07:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3632277 00:32:03.373 07:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:03.373 07:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:03.634 Running I/O for 10 seconds... 
00:32:04.578 Latency(us) 00:32:04.578 [2024-11-20T06:46:22.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:04.578 Nvme0n1 : 1.00 16493.00 64.43 0.00 0.00 0.00 0.00 0.00 00:32:04.578 [2024-11-20T06:46:22.788Z] =================================================================================================================== 00:32:04.578 [2024-11-20T06:46:22.788Z] Total : 16493.00 64.43 0.00 0.00 0.00 0.00 0.00 00:32:04.578 00:32:05.521 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 073183ce-1979-4ae9-b5ec-c055c15f9214 00:32:05.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.521 Nvme0n1 : 2.00 16718.50 65.31 0.00 0.00 0.00 0.00 0.00 00:32:05.521 [2024-11-20T06:46:23.731Z] =================================================================================================================== 00:32:05.521 [2024-11-20T06:46:23.731Z] Total : 16718.50 65.31 0.00 0.00 0.00 0.00 0.00 00:32:05.521 00:32:05.521 true 00:32:05.521 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 073183ce-1979-4ae9-b5ec-c055c15f9214 00:32:05.521 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:05.781 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:05.782 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:05.782 07:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3632277 00:32:06.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.814 Nvme0n1 : 3.00 16991.00 66.37 0.00 0.00 0.00 0.00 0.00 00:32:06.814 [2024-11-20T06:46:25.024Z] =================================================================================================================== 00:32:06.814 [2024-11-20T06:46:25.024Z] Total : 16991.00 66.37 0.00 0.00 0.00 0.00 0.00 00:32:06.814 00:32:07.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:07.755 Nvme0n1 : 4.00 17155.25 67.01 0.00 0.00 0.00 0.00 0.00 00:32:07.755 [2024-11-20T06:46:25.965Z] =================================================================================================================== 00:32:07.755 [2024-11-20T06:46:25.965Z] Total : 17155.25 67.01 0.00 0.00 0.00 0.00 0.00 00:32:07.755 00:32:08.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.696 Nvme0n1 : 5.00 17797.80 69.52 0.00 0.00 0.00 0.00 0.00 00:32:08.696 [2024-11-20T06:46:26.906Z] =================================================================================================================== 00:32:08.696 [2024-11-20T06:46:26.906Z] Total : 17797.80 69.52 0.00 0.00 0.00 0.00 0.00 00:32:08.696 00:32:09.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:09.639 Nvme0n1 : 6.00 18986.17 74.16 0.00 0.00 0.00 0.00 0.00 00:32:09.639 [2024-11-20T06:46:27.849Z] 
=================================================================================================================== 00:32:09.639 [2024-11-20T06:46:27.849Z] Total : 18986.17 74.16 0.00 0.00 0.00 0.00 0.00 00:32:09.639 00:32:10.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:10.581 Nvme0n1 : 7.00 19837.29 77.49 0.00 0.00 0.00 0.00 0.00 00:32:10.581 [2024-11-20T06:46:28.791Z] =================================================================================================================== 00:32:10.581 [2024-11-20T06:46:28.791Z] Total : 19837.29 77.49 0.00 0.00 0.00 0.00 0.00 00:32:10.581 00:32:11.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.522 Nvme0n1 : 8.00 20477.62 79.99 0.00 0.00 0.00 0.00 0.00 00:32:11.522 [2024-11-20T06:46:29.732Z] =================================================================================================================== 00:32:11.522 [2024-11-20T06:46:29.732Z] Total : 20477.62 79.99 0.00 0.00 0.00 0.00 0.00 00:32:11.522 00:32:12.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:12.464 Nvme0n1 : 9.00 20963.22 81.89 0.00 0.00 0.00 0.00 0.00 00:32:12.464 [2024-11-20T06:46:30.674Z] =================================================================================================================== 00:32:12.464 [2024-11-20T06:46:30.674Z] Total : 20963.22 81.89 0.00 0.00 0.00 0.00 0.00 00:32:12.464 00:32:13.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:13.848 Nvme0n1 : 10.00 21364.50 83.46 0.00 0.00 0.00 0.00 0.00 00:32:13.848 [2024-11-20T06:46:32.058Z] =================================================================================================================== 00:32:13.848 [2024-11-20T06:46:32.058Z] Total : 21364.50 83.46 0.00 0.00 0.00 0.00 0.00 00:32:13.848 00:32:13.848 00:32:13.848 Latency(us) 00:32:13.848 [2024-11-20T06:46:32.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:13.848 Nvme0n1 : 10.00 21365.89 83.46 0.00 0.00 5986.72 3850.24 22828.37 00:32:13.848 [2024-11-20T06:46:32.058Z] =================================================================================================================== 00:32:13.848 [2024-11-20T06:46:32.058Z] Total : 21365.89 83.46 0.00 0.00 5986.72 3850.24 22828.37 00:32:13.848 { 00:32:13.848 "results": [ 00:32:13.848 { 00:32:13.848 "job": "Nvme0n1", 00:32:13.848 "core_mask": "0x2", 00:32:13.848 "workload": "randwrite", 00:32:13.848 "status": "finished", 00:32:13.848 "queue_depth": 128, 00:32:13.848 "io_size": 4096, 00:32:13.848 "runtime": 10.004592, 00:32:13.848 "iops": 21365.888783870447, 00:32:13.848 "mibps": 83.46050306199393, 00:32:13.848 "io_failed": 0, 00:32:13.848 "io_timeout": 0, 00:32:13.848 "avg_latency_us": 5986.721241783895, 00:32:13.848 "min_latency_us": 3850.24, 00:32:13.848 "max_latency_us": 22828.373333333333 00:32:13.848 } 00:32:13.848 ], 00:32:13.848 "core_count": 1 00:32:13.848 } 00:32:13.848 07:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3631959 00:32:13.848 07:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3631959 ']' 00:32:13.848 07:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3631959 00:32:13.848 
07:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:32:13.848 07:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:13.848 07:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3631959 00:32:13.848 07:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:13.848 07:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:13.848 07:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3631959' 00:32:13.848 killing process with pid 3631959 00:32:13.848 07:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3631959 00:32:13.848 Received shutdown signal, test time was about 10.000000 seconds 00:32:13.848 00:32:13.848 Latency(us) 00:32:13.848 [2024-11-20T06:46:32.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.848 [2024-11-20T06:46:32.058Z] =================================================================================================================== 00:32:13.848 [2024-11-20T06:46:32.058Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:13.848 07:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3631959 00:32:13.848 07:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:13.848 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:14.109 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 073183ce-1979-4ae9-b5ec-c055c15f9214 00:32:14.109 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:14.370 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:14.370 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:14.370 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:14.370 [2024-11-20 07:46:32.530974] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:14.370 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 073183ce-1979-4ae9-b5ec-c055c15f9214 00:32:14.370 
07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:32:14.370 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 073183ce-1979-4ae9-b5ec-c055c15f9214 00:32:14.370 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:14.370 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:14.370 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:14.370 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:14.370 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:14.631 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:14.631 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:14.631 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:14.631 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 073183ce-1979-4ae9-b5ec-c055c15f9214 00:32:14.631 request: 00:32:14.631 { 00:32:14.631 "uuid": "073183ce-1979-4ae9-b5ec-c055c15f9214", 00:32:14.631 "method": "bdev_lvol_get_lvstores", 00:32:14.631 "req_id": 1 00:32:14.631 } 00:32:14.631 Got JSON-RPC error response 00:32:14.631 response: 00:32:14.631 { 00:32:14.631 "code": -19, 00:32:14.631 "message": "No such device" 00:32:14.631 } 00:32:14.631 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:32:14.631 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:14.631 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:14.631 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:14.631 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:14.891 aio_bdev 00:32:14.891 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
230345cf-406a-4fd1-9a4d-bec46a38a7c8 00:32:14.891 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=230345cf-406a-4fd1-9a4d-bec46a38a7c8 00:32:14.891 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:14.891 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:32:14.891 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:14.891 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:14.891 07:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:15.152 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 230345cf-406a-4fd1-9a4d-bec46a38a7c8 -t 2000 00:32:15.152 [ 00:32:15.152 { 00:32:15.152 "name": "230345cf-406a-4fd1-9a4d-bec46a38a7c8", 00:32:15.152 "aliases": [ 00:32:15.152 "lvs/lvol" 00:32:15.152 ], 00:32:15.152 "product_name": "Logical Volume", 00:32:15.152 "block_size": 4096, 00:32:15.152 "num_blocks": 38912, 00:32:15.152 "uuid": "230345cf-406a-4fd1-9a4d-bec46a38a7c8", 00:32:15.152 "assigned_rate_limits": { 00:32:15.152 "rw_ios_per_sec": 0, 00:32:15.152 "rw_mbytes_per_sec": 0, 00:32:15.152 "r_mbytes_per_sec": 0, 00:32:15.152 "w_mbytes_per_sec": 0 00:32:15.152 }, 00:32:15.152 "claimed": false, 00:32:15.152 "zoned": false, 00:32:15.152 "supported_io_types": { 00:32:15.152 "read": true, 00:32:15.152 "write": true, 00:32:15.152 "unmap": true, 00:32:15.152 "flush": false, 00:32:15.152 "reset": true, 00:32:15.152 "nvme_admin": false, 00:32:15.152 "nvme_io": false, 00:32:15.152 "nvme_io_md": false, 00:32:15.152 "write_zeroes": true, 00:32:15.152 "zcopy": false, 00:32:15.152 "get_zone_info": false, 00:32:15.152 "zone_management": false, 00:32:15.152 "zone_append": false, 00:32:15.152 "compare": false, 00:32:15.152 "compare_and_write": false, 00:32:15.152 "abort": false, 00:32:15.152 "seek_hole": true, 00:32:15.152 "seek_data": true, 00:32:15.152 "copy": false, 00:32:15.152 "nvme_iov_md": false 00:32:15.152 }, 00:32:15.152 "driver_specific": { 00:32:15.152 "lvol": { 00:32:15.152 "lvol_store_uuid": "073183ce-1979-4ae9-b5ec-c055c15f9214", 00:32:15.152 "base_bdev": "aio_bdev", 00:32:15.152 "thin_provision": false, 00:32:15.152 "num_allocated_clusters": 38, 00:32:15.152 "snapshot": false, 00:32:15.152 "clone": false, 00:32:15.152 "esnap_clone": false 00:32:15.152 } 00:32:15.152 } 00:32:15.152 } 00:32:15.152 ] 00:32:15.152 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:32:15.152 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 073183ce-1979-4ae9-b5ec-c055c15f9214 00:32:15.152 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:15.412 07:46:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:15.412 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 073183ce-1979-4ae9-b5ec-c055c15f9214 00:32:15.412 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:15.672 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:15.672 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 230345cf-406a-4fd1-9a4d-bec46a38a7c8 00:32:15.672 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 073183ce-1979-4ae9-b5ec-c055c15f9214 00:32:15.933 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.194 00:32:16.194 real 0m16.128s 00:32:16.194 user 0m15.232s 00:32:16.194 sys 0m1.776s 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:16.194 ************************************ 00:32:16.194 END TEST lvs_grow_clean 00:32:16.194 ************************************ 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:16.194 ************************************ 00:32:16.194 START TEST lvs_grow_dirty 00:32:16.194 ************************************ 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.194 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:16.455 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:16.455 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:16.716 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:16.716 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:16.716 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:16.717 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:16.717 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:16.717 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 lvol 150 00:32:16.977 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7130fab9-15f5-4231-aad5-7e552a393ad6 00:32:16.977 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.977 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:17.237 [2024-11-20 07:46:35.238920] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:17.237 [2024-11-20 07:46:35.239080] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:17.237 true 00:32:17.237 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:17.237 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:17.237 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:17.237 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:17.497 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7130fab9-15f5-4231-aad5-7e552a393ad6 00:32:17.758 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:17.758 [2024-11-20 07:46:35.947473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.758 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:18.020 07:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3635022 00:32:18.020 07:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:18.020 07:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:18.020 07:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3635022 /var/tmp/bdevperf.sock 00:32:18.020 07:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3635022 ']' 00:32:18.020 07:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:18.020 07:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:18.020 07:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:18.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
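The lvs_grow_dirty fixture traced above mirrors the clean variant: a 200M file is exposed as an AIO bdev, an lvstore with a 4M cluster size is created on it (49 data clusters), a 150M lvol is carved out, and the backing file is later grown to 400M and rescanned while bdevperf drives I/O over NVMe/TCP. A minimal sketch of that grow sequence, using only rpc.py subcommands that appear in this trace; the file path, bdev/lvstore names, and the captured $lvs UUID are placeholders, and the 99-cluster result assumes the same 4194304-byte cluster size used here:

#!/usr/bin/env bash
set -e
# expose a 200M file as a 4096-byte-block AIO bdev and build an lvstore on it
truncate -s 200M /tmp/aio_file
scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)

# grow the backing file, let the AIO bdev pick up the new size, then grow the lvstore
truncate -s 400M /tmp/aio_file
scripts/rpc.py bdev_aio_rescan aio_bdev
scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"

# total_data_clusters should now reflect the grown capacity (99 in this trace)
scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'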
00:32:18.020 07:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:18.020 07:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:18.020 [2024-11-20 07:46:36.184395] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:32:18.020 [2024-11-20 07:46:36.184462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635022 ] 00:32:18.281 [2024-11-20 07:46:36.273826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.281 [2024-11-20 07:46:36.310404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.852 07:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:18.852 07:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:18.852 07:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:19.112 Nvme0n1 00:32:19.373 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:19.373 [ 00:32:19.373 { 00:32:19.373 "name": "Nvme0n1", 00:32:19.373 "aliases": [ 00:32:19.373 "7130fab9-15f5-4231-aad5-7e552a393ad6" 00:32:19.373 ], 00:32:19.373 "product_name": "NVMe disk", 00:32:19.373 "block_size": 4096, 00:32:19.373 "num_blocks": 38912, 00:32:19.373 "uuid": "7130fab9-15f5-4231-aad5-7e552a393ad6", 00:32:19.373 "numa_id": 0, 00:32:19.373 "assigned_rate_limits": { 00:32:19.373 "rw_ios_per_sec": 0, 00:32:19.373 "rw_mbytes_per_sec": 0, 00:32:19.373 "r_mbytes_per_sec": 0, 00:32:19.373 "w_mbytes_per_sec": 0 00:32:19.373 }, 00:32:19.373 "claimed": false, 00:32:19.373 "zoned": false, 00:32:19.373 "supported_io_types": { 00:32:19.373 "read": true, 00:32:19.373 "write": true, 00:32:19.373 "unmap": true, 00:32:19.373 "flush": true, 00:32:19.373 "reset": true, 00:32:19.373 "nvme_admin": true, 00:32:19.373 "nvme_io": true, 00:32:19.373 "nvme_io_md": false, 00:32:19.373 "write_zeroes": true, 00:32:19.373 "zcopy": false, 00:32:19.373 "get_zone_info": false, 00:32:19.373 "zone_management": false, 00:32:19.373 "zone_append": false, 00:32:19.373 "compare": true, 00:32:19.373 "compare_and_write": true, 00:32:19.373 "abort": true, 00:32:19.373 "seek_hole": false, 00:32:19.373 "seek_data": false, 00:32:19.373 "copy": true, 00:32:19.373 "nvme_iov_md": false 00:32:19.373 }, 00:32:19.373 "memory_domains": [ 00:32:19.373 { 00:32:19.373 "dma_device_id": "system", 00:32:19.373 "dma_device_type": 1 00:32:19.373 } 00:32:19.373 ], 00:32:19.373 "driver_specific": { 00:32:19.373 "nvme": [ 00:32:19.373 { 00:32:19.373 "trid": { 00:32:19.373 "trtype": "TCP", 00:32:19.373 "adrfam": "IPv4", 00:32:19.373 "traddr": "10.0.0.2", 00:32:19.373 "trsvcid": "4420", 00:32:19.373 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:19.373 }, 00:32:19.373 "ctrlr_data": 
{ 00:32:19.373 "cntlid": 1, 00:32:19.373 "vendor_id": "0x8086", 00:32:19.373 "model_number": "SPDK bdev Controller", 00:32:19.373 "serial_number": "SPDK0", 00:32:19.373 "firmware_revision": "25.01", 00:32:19.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.373 "oacs": { 00:32:19.373 "security": 0, 00:32:19.373 "format": 0, 00:32:19.373 "firmware": 0, 00:32:19.373 "ns_manage": 0 00:32:19.373 }, 00:32:19.373 "multi_ctrlr": true, 00:32:19.373 "ana_reporting": false 00:32:19.373 }, 00:32:19.373 "vs": { 00:32:19.373 "nvme_version": "1.3" 00:32:19.373 }, 00:32:19.373 "ns_data": { 00:32:19.373 "id": 1, 00:32:19.373 "can_share": true 00:32:19.373 } 00:32:19.373 } 00:32:19.373 ], 00:32:19.373 "mp_policy": "active_passive" 00:32:19.373 } 00:32:19.373 } 00:32:19.373 ] 00:32:19.373 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3635359 00:32:19.373 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:19.373 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:19.373 Running I/O for 10 seconds... 00:32:20.758 Latency(us) 00:32:20.758 [2024-11-20T06:46:38.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:20.758 Nvme0n1 : 1.00 17536.00 68.50 0.00 0.00 0.00 0.00 0.00 00:32:20.758 [2024-11-20T06:46:38.968Z] =================================================================================================================== 00:32:20.758 [2024-11-20T06:46:38.968Z] Total : 17536.00 68.50 0.00 0.00 0.00 0.00 0.00 00:32:20.758 00:32:21.330 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:21.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:21.590 Nvme0n1 : 2.00 17785.00 69.47 0.00 0.00 0.00 0.00 0.00 00:32:21.590 [2024-11-20T06:46:39.800Z] =================================================================================================================== 00:32:21.591 [2024-11-20T06:46:39.801Z] Total : 17785.00 69.47 0.00 0.00 0.00 0.00 0.00 00:32:21.591 00:32:21.591 true 00:32:21.591 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:21.591 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:21.853 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:21.853 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:21.853 07:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3635359 00:32:22.425 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.425 Nvme0n1 : 
3.00 17847.00 69.71 0.00 0.00 0.00 0.00 0.00 00:32:22.425 [2024-11-20T06:46:40.635Z] =================================================================================================================== 00:32:22.425 [2024-11-20T06:46:40.635Z] Total : 17847.00 69.71 0.00 0.00 0.00 0.00 0.00 00:32:22.425 00:32:23.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.811 Nvme0n1 : 4.00 17909.50 69.96 0.00 0.00 0.00 0.00 0.00 00:32:23.811 [2024-11-20T06:46:42.021Z] =================================================================================================================== 00:32:23.811 [2024-11-20T06:46:42.021Z] Total : 17909.50 69.96 0.00 0.00 0.00 0.00 0.00 00:32:23.811 00:32:24.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.382 Nvme0n1 : 5.00 18620.20 72.74 0.00 0.00 0.00 0.00 0.00 00:32:24.382 [2024-11-20T06:46:42.592Z] =================================================================================================================== 00:32:24.382 [2024-11-20T06:46:42.592Z] Total : 18620.20 72.74 0.00 0.00 0.00 0.00 0.00 00:32:24.382 00:32:25.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.768 Nvme0n1 : 6.00 19760.83 77.19 0.00 0.00 0.00 0.00 0.00 00:32:25.768 [2024-11-20T06:46:43.978Z] =================================================================================================================== 00:32:25.768 [2024-11-20T06:46:43.978Z] Total : 19760.83 77.19 0.00 0.00 0.00 0.00 0.00 00:32:25.768 00:32:26.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.708 Nvme0n1 : 7.00 20582.43 80.40 0.00 0.00 0.00 0.00 0.00 00:32:26.708 [2024-11-20T06:46:44.918Z] =================================================================================================================== 00:32:26.708 [2024-11-20T06:46:44.919Z] Total : 20582.43 80.40 0.00 0.00 0.00 0.00 0.00 00:32:26.709 00:32:27.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.651 Nvme0n1 : 8.00 21200.50 82.81 0.00 0.00 0.00 0.00 0.00 00:32:27.651 [2024-11-20T06:46:45.861Z] =================================================================================================================== 00:32:27.651 [2024-11-20T06:46:45.861Z] Total : 21200.50 82.81 0.00 0.00 0.00 0.00 0.00 00:32:27.651 00:32:28.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:28.592 Nvme0n1 : 9.00 21681.22 84.69 0.00 0.00 0.00 0.00 0.00 00:32:28.592 [2024-11-20T06:46:46.802Z] =================================================================================================================== 00:32:28.592 [2024-11-20T06:46:46.802Z] Total : 21681.22 84.69 0.00 0.00 0.00 0.00 0.00 00:32:28.592 00:32:29.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.535 Nvme0n1 : 10.00 22065.80 86.19 0.00 0.00 0.00 0.00 0.00 00:32:29.535 [2024-11-20T06:46:47.745Z] =================================================================================================================== 00:32:29.535 [2024-11-20T06:46:47.745Z] Total : 22065.80 86.19 0.00 0.00 0.00 0.00 0.00 00:32:29.535 00:32:29.535 00:32:29.535 Latency(us) 00:32:29.535 [2024-11-20T06:46:47.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.535 Nvme0n1 : 10.00 22071.42 86.22 0.00 0.00 5797.01 2894.51 27306.67 00:32:29.535 
[2024-11-20T06:46:47.745Z] =================================================================================================================== 00:32:29.535 [2024-11-20T06:46:47.745Z] Total : 22071.42 86.22 0.00 0.00 5797.01 2894.51 27306.67 00:32:29.535 { 00:32:29.535 "results": [ 00:32:29.535 { 00:32:29.535 "job": "Nvme0n1", 00:32:29.535 "core_mask": "0x2", 00:32:29.535 "workload": "randwrite", 00:32:29.535 "status": "finished", 00:32:29.535 "queue_depth": 128, 00:32:29.535 "io_size": 4096, 00:32:29.535 "runtime": 10.003255, 00:32:29.535 "iops": 22071.415754172016, 00:32:29.535 "mibps": 86.21646778973444, 00:32:29.535 "io_failed": 0, 00:32:29.535 "io_timeout": 0, 00:32:29.535 "avg_latency_us": 5797.0139642912145, 00:32:29.535 "min_latency_us": 2894.5066666666667, 00:32:29.535 "max_latency_us": 27306.666666666668 00:32:29.535 } 00:32:29.535 ], 00:32:29.535 "core_count": 1 00:32:29.535 } 00:32:29.535 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3635022 00:32:29.535 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3635022 ']' 00:32:29.535 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3635022 00:32:29.535 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:32:29.535 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:29.535 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3635022 00:32:29.535 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:29.535 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:29.535 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3635022' 00:32:29.535 killing process with pid 3635022 00:32:29.535 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3635022 00:32:29.535 Received shutdown signal, test time was about 10.000000 seconds 00:32:29.535 00:32:29.535 Latency(us) 00:32:29.535 [2024-11-20T06:46:47.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.535 [2024-11-20T06:46:47.745Z] =================================================================================================================== 00:32:29.535 [2024-11-20T06:46:47.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:29.535 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3635022 00:32:29.797 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:29.797 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:32:30.058 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:30.058 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3631508 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3631508 00:32:30.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3631508 Killed "${NVMF_APP[@]}" "$@" 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3637370 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3637370 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3637370 ']' 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:30.319 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:30.319 [2024-11-20 07:46:48.408949] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:30.319 [2024-11-20 07:46:48.410026] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:32:30.319 [2024-11-20 07:46:48.410071] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.319 [2024-11-20 07:46:48.504650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.579 [2024-11-20 07:46:48.537249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.579 [2024-11-20 07:46:48.537279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.579 [2024-11-20 07:46:48.537286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.579 [2024-11-20 07:46:48.537291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.579 [2024-11-20 07:46:48.537295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:30.579 [2024-11-20 07:46:48.537797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.579 [2024-11-20 07:46:48.590263] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:30.579 [2024-11-20 07:46:48.590452] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
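The "Set spdk_thread ... to intr mode" notices above come from restarting the target with --interrupt-mode after the previous instance was killed. Reduced to its essentials, the launch line used by this job is roughly as follows (netns name, event mask, and core mask taken from the trace):

sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1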
00:32:31.151 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:31.151 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:31.151 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:31.151 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:31.151 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:31.151 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.151 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:31.412 [2024-11-20 07:46:49.423909] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:31.412 [2024-11-20 07:46:49.424129] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:31.412 [2024-11-20 07:46:49.424218] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:31.412 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:31.412 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7130fab9-15f5-4231-aad5-7e552a393ad6 00:32:31.412 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=7130fab9-15f5-4231-aad5-7e552a393ad6 00:32:31.412 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:31.412 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:31.412 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:31.412 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:31.412 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:31.673 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7130fab9-15f5-4231-aad5-7e552a393ad6 -t 2000 00:32:31.673 [ 00:32:31.673 { 00:32:31.673 "name": "7130fab9-15f5-4231-aad5-7e552a393ad6", 00:32:31.673 "aliases": [ 00:32:31.673 "lvs/lvol" 00:32:31.673 ], 00:32:31.673 "product_name": "Logical Volume", 00:32:31.673 "block_size": 4096, 00:32:31.673 "num_blocks": 38912, 00:32:31.673 "uuid": "7130fab9-15f5-4231-aad5-7e552a393ad6", 00:32:31.673 "assigned_rate_limits": { 00:32:31.673 "rw_ios_per_sec": 0, 00:32:31.673 "rw_mbytes_per_sec": 0, 00:32:31.673 
"r_mbytes_per_sec": 0, 00:32:31.673 "w_mbytes_per_sec": 0 00:32:31.673 }, 00:32:31.673 "claimed": false, 00:32:31.673 "zoned": false, 00:32:31.673 "supported_io_types": { 00:32:31.673 "read": true, 00:32:31.673 "write": true, 00:32:31.673 "unmap": true, 00:32:31.673 "flush": false, 00:32:31.673 "reset": true, 00:32:31.673 "nvme_admin": false, 00:32:31.673 "nvme_io": false, 00:32:31.673 "nvme_io_md": false, 00:32:31.673 "write_zeroes": true, 00:32:31.673 "zcopy": false, 00:32:31.673 "get_zone_info": false, 00:32:31.673 "zone_management": false, 00:32:31.673 "zone_append": false, 00:32:31.673 "compare": false, 00:32:31.673 "compare_and_write": false, 00:32:31.673 "abort": false, 00:32:31.673 "seek_hole": true, 00:32:31.673 "seek_data": true, 00:32:31.673 "copy": false, 00:32:31.673 "nvme_iov_md": false 00:32:31.673 }, 00:32:31.673 "driver_specific": { 00:32:31.673 "lvol": { 00:32:31.673 "lvol_store_uuid": "f4116a3c-93a8-4eb4-a96f-7ce0503ef033", 00:32:31.673 "base_bdev": "aio_bdev", 00:32:31.673 "thin_provision": false, 00:32:31.673 "num_allocated_clusters": 38, 00:32:31.673 "snapshot": false, 00:32:31.673 "clone": false, 00:32:31.673 "esnap_clone": false 00:32:31.673 } 00:32:31.673 } 00:32:31.673 } 00:32:31.673 ] 00:32:31.673 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:31.673 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:31.673 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:31.934 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:31.934 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:31.934 07:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:31.934 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:31.934 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:32.195 [2024-11-20 07:46:50.282285] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:32.195 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:32.195 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:32.195 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:32.195 07:46:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.195 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.195 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.195 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.195 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.195 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.195 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.195 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:32.195 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:32.455 request: 00:32:32.455 { 00:32:32.455 "uuid": "f4116a3c-93a8-4eb4-a96f-7ce0503ef033", 00:32:32.455 "method": "bdev_lvol_get_lvstores", 00:32:32.455 "req_id": 1 00:32:32.455 } 00:32:32.455 Got JSON-RPC error response 00:32:32.455 response: 00:32:32.455 { 00:32:32.455 "code": -19, 00:32:32.455 "message": "No such device" 00:32:32.455 } 00:32:32.455 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:32.455 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:32.455 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:32.455 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:32.455 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:32.716 aio_bdev 00:32:32.716 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7130fab9-15f5-4231-aad5-7e552a393ad6 00:32:32.716 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=7130fab9-15f5-4231-aad5-7e552a393ad6 00:32:32.716 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:32.716 07:46:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:32.716 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:32.716 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:32.716 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:32.716 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7130fab9-15f5-4231-aad5-7e552a393ad6 -t 2000 00:32:32.977 [ 00:32:32.977 { 00:32:32.977 "name": "7130fab9-15f5-4231-aad5-7e552a393ad6", 00:32:32.977 "aliases": [ 00:32:32.977 "lvs/lvol" 00:32:32.977 ], 00:32:32.977 "product_name": "Logical Volume", 00:32:32.977 "block_size": 4096, 00:32:32.977 "num_blocks": 38912, 00:32:32.977 "uuid": "7130fab9-15f5-4231-aad5-7e552a393ad6", 00:32:32.977 "assigned_rate_limits": { 00:32:32.977 "rw_ios_per_sec": 0, 00:32:32.977 "rw_mbytes_per_sec": 0, 00:32:32.977 "r_mbytes_per_sec": 0, 00:32:32.977 "w_mbytes_per_sec": 0 00:32:32.977 }, 00:32:32.977 "claimed": false, 00:32:32.977 "zoned": false, 00:32:32.977 "supported_io_types": { 00:32:32.977 "read": true, 00:32:32.977 "write": true, 00:32:32.977 "unmap": true, 00:32:32.977 "flush": false, 00:32:32.977 "reset": true, 00:32:32.977 "nvme_admin": false, 00:32:32.977 "nvme_io": false, 00:32:32.977 "nvme_io_md": false, 00:32:32.977 "write_zeroes": true, 00:32:32.977 "zcopy": false, 00:32:32.977 "get_zone_info": false, 00:32:32.977 "zone_management": false, 00:32:32.977 "zone_append": false, 00:32:32.977 "compare": false, 00:32:32.977 "compare_and_write": false, 00:32:32.977 "abort": false, 00:32:32.977 "seek_hole": true, 00:32:32.977 "seek_data": true, 00:32:32.977 "copy": false, 00:32:32.977 "nvme_iov_md": false 00:32:32.977 }, 00:32:32.977 "driver_specific": { 00:32:32.977 "lvol": { 00:32:32.977 "lvol_store_uuid": "f4116a3c-93a8-4eb4-a96f-7ce0503ef033", 00:32:32.977 "base_bdev": "aio_bdev", 00:32:32.977 "thin_provision": false, 00:32:32.977 "num_allocated_clusters": 38, 00:32:32.977 "snapshot": false, 00:32:32.978 "clone": false, 00:32:32.978 "esnap_clone": false 00:32:32.978 } 00:32:32.978 } 00:32:32.978 } 00:32:32.978 ] 00:32:32.978 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:32.978 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:32.978 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:33.238 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:33.238 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:33.238 07:46:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:33.238 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:33.238 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7130fab9-15f5-4231-aad5-7e552a393ad6 00:32:33.499 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f4116a3c-93a8-4eb4-a96f-7ce0503ef033 00:32:33.759 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:33.759 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:33.759 00:32:33.759 real 0m17.600s 00:32:33.759 user 0m35.436s 00:32:33.759 sys 0m3.162s 00:32:33.759 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:33.759 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:33.759 ************************************ 00:32:33.759 END TEST lvs_grow_dirty 00:32:33.759 ************************************ 00:32:33.759 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:33.759 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:32:33.759 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:32:33.759 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:32:33.759 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:34.019 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:32:34.019 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:32:34.019 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:32:34.019 07:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:34.020 nvmf_trace.0 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
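The dirty variant that just finished relies on crash recovery rather than a clean shutdown: the first target was killed with SIGKILL mid-run (the "line 75: 3631508 Killed" message above), and when the fresh target re-created the AIO bdev, the blobstore replayed its metadata ("Performing recovery on blobstore") and surfaced the grown lvstore. A condensed sketch of that recovery check, with the same placeholder names as the grow sketch earlier:

# kill the target hard so the lvstore is left dirty (no clean blobstore shutdown)
sudo kill -9 "$nvmfpid"

# start a fresh target, then re-add the same backing file; loading the AIO bdev
# triggers blobstore recovery and re-registers the lvstore and its lvol
scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096

# recovery should surface the grown geometry: 99 total clusters, 61 still free here
scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters, .[0].free_clusters'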
00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:34.020 rmmod nvme_tcp 00:32:34.020 rmmod nvme_fabrics 00:32:34.020 rmmod nvme_keyring 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3637370 ']' 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3637370 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3637370 ']' 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3637370 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3637370 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3637370' 00:32:34.020 killing process with pid 3637370 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3637370 00:32:34.020 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3637370 00:32:34.280 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:34.280 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:34.280 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:34.280 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:34.280 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:34.280 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:34.280 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:34.280 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:34.280 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:34.280 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.281 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.281 07:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.194 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:36.194 00:32:36.194 real 0m45.145s 00:32:36.194 user 0m53.687s 00:32:36.194 sys 0m11.064s 00:32:36.194 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:36.194 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:36.194 ************************************ 00:32:36.194 END TEST nvmf_lvs_grow 00:32:36.195 ************************************ 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:36.457 ************************************ 00:32:36.457 START TEST nvmf_bdev_io_wait 00:32:36.457 ************************************ 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:36.457 * Looking for test storage... 
00:32:36.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:36.457 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:36.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.720 --rc genhtml_branch_coverage=1 00:32:36.720 --rc genhtml_function_coverage=1 00:32:36.720 --rc genhtml_legend=1 00:32:36.720 --rc geninfo_all_blocks=1 00:32:36.720 --rc geninfo_unexecuted_blocks=1 00:32:36.720 00:32:36.720 ' 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:36.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.720 --rc genhtml_branch_coverage=1 00:32:36.720 --rc genhtml_function_coverage=1 00:32:36.720 --rc genhtml_legend=1 00:32:36.720 --rc geninfo_all_blocks=1 00:32:36.720 --rc geninfo_unexecuted_blocks=1 00:32:36.720 00:32:36.720 ' 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:36.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.720 --rc genhtml_branch_coverage=1 00:32:36.720 --rc genhtml_function_coverage=1 00:32:36.720 --rc genhtml_legend=1 00:32:36.720 --rc geninfo_all_blocks=1 00:32:36.720 --rc geninfo_unexecuted_blocks=1 00:32:36.720 00:32:36.720 ' 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:36.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.720 --rc genhtml_branch_coverage=1 00:32:36.720 --rc genhtml_function_coverage=1 00:32:36.720 --rc genhtml_legend=1 00:32:36.720 --rc geninfo_all_blocks=1 00:32:36.720 --rc 
geninfo_unexecuted_blocks=1 00:32:36.720 00:32:36.720 ' 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:36.720 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:36.721 07:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:44.863 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:44.864 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:44.864 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:44.864 Found net devices under 0000:31:00.0: cvl_0_0 00:32:44.864 
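The scan above is gather_supported_nvmf_pci_devs at work: nvmf/common.sh keeps tables of supported Intel E810/X722 and Mellanox device IDs, narrows them to the e810 family selected for this job, and resolves each surviving PCI function to its kernel net device through sysfs. The lookup itself is just a glob; as a sketch against the two ports this host reports:

  # map each E810 PCI function to its net interface, as the trace does
  for pci in 0000:31:00.0 0000:31:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifname
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done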
07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:44.864 Found net devices under 0000:31:00.1: cvl_0_1 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:44.864 07:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:44.864 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:44.864 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:44.864 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:44.864 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:44.864 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:44.864 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:44.864 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:44.864 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:44.864 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:44.864 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:44.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:44.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:32:44.865 00:32:44.865 --- 10.0.0.2 ping statistics --- 00:32:44.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.865 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:44.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:44.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:32:44.865 00:32:44.865 --- 10.0.0.1 ping statistics --- 00:32:44.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.865 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3642451 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3642451 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3642451 ']' 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
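Both pings succeeding means the split topology is in place: the first port (cvl_0_0) now lives in the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an SPDK-tagged iptables rule opening TCP/4420 between them. Condensed from the traced commands, the bring-up is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                     # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator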
00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:44.865 07:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:44.865 [2024-11-20 07:47:02.420453] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:44.865 [2024-11-20 07:47:02.421622] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:32:44.865 [2024-11-20 07:47:02.421671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:44.865 [2024-11-20 07:47:02.522140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:44.865 [2024-11-20 07:47:02.577856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:44.865 [2024-11-20 07:47:02.577903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:44.865 [2024-11-20 07:47:02.577913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:44.865 [2024-11-20 07:47:02.577920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:44.865 [2024-11-20 07:47:02.577927] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:44.865 [2024-11-20 07:47:02.580055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.865 [2024-11-20 07:47:02.580215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:44.865 [2024-11-20 07:47:02.580411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.865 [2024-11-20 07:47:02.580410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:44.865 [2024-11-20 07:47:02.580755] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
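Those notices are the interrupt-mode bring-up: with --interrupt-mode and -m 0xF, four reactors start on cores 0-3 and park in the kernel's event loop instead of busy-polling, and each spdk_thread is switched to interrupt mode as it is created (hence the odd "to intr mode from intr mode" wording). Per-core reactor state on a live target can be inspected with the framework_get_reactors RPC over the same UNIX socket; whether its output carries an explicit interrupt flag depends on the SPDK version, so treat this as a quick check rather than something this trace shows:

  # not part of the test itself; the RPC socket defaults to /var/tmp/spdk.sock
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_reactors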
00:32:45.126 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:45.126 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:32:45.126 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:45.126 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:45.126 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.126 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:45.126 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:45.126 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.126 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.126 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.126 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:45.126 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.126 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.388 [2024-11-20 07:47:03.345623] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:45.388 [2024-11-20 07:47:03.346424] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:45.388 [2024-11-20 07:47:03.346481] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:45.388 [2024-11-20 07:47:03.346683] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
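Because nvmf_tgt was started with --wait-for-rpc, the bdev pool limits have to go in before anything else: bdev_set_options -p 5 -c 1 shrinks the global bdev_io pool to five entries with a per-thread cache of one, which is exactly what lets this suite force the IO-wait (pool exhausted) path, and framework_start_init then completes subsystem initialization. rpc_cmd is a thin wrapper over scripts/rpc.py, so the same bring-up, including the transport and subsystem calls that follow in the trace, can be issued directly (a sketch; the direct rpc.py spelling is an assumption, the arguments are taken from the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_set_options -p 5 -c 1          # 5-entry bdev_io pool, per-thread cache of 1
  $rpc framework_start_init                # leave the --wait-for-rpc pause
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420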
00:32:45.388 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.388 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:45.388 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.388 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.388 [2024-11-20 07:47:03.357282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:45.388 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.388 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:45.388 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.388 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.388 Malloc0 00:32:45.388 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.389 [2024-11-20 07:47:03.429397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3642504 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3642506 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.389 { 00:32:45.389 "params": { 00:32:45.389 "name": "Nvme$subsystem", 00:32:45.389 "trtype": "$TEST_TRANSPORT", 00:32:45.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.389 "adrfam": "ipv4", 00:32:45.389 "trsvcid": "$NVMF_PORT", 00:32:45.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.389 "hdgst": ${hdgst:-false}, 00:32:45.389 "ddgst": ${ddgst:-false} 00:32:45.389 }, 00:32:45.389 "method": "bdev_nvme_attach_controller" 00:32:45.389 } 00:32:45.389 EOF 00:32:45.389 )") 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3642509 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.389 { 00:32:45.389 "params": { 00:32:45.389 "name": "Nvme$subsystem", 00:32:45.389 "trtype": "$TEST_TRANSPORT", 00:32:45.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.389 "adrfam": "ipv4", 00:32:45.389 "trsvcid": "$NVMF_PORT", 00:32:45.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.389 "hdgst": ${hdgst:-false}, 00:32:45.389 "ddgst": ${ddgst:-false} 00:32:45.389 }, 00:32:45.389 "method": "bdev_nvme_attach_controller" 00:32:45.389 } 00:32:45.389 EOF 00:32:45.389 )") 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3642512 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.389 { 00:32:45.389 "params": { 00:32:45.389 "name": "Nvme$subsystem", 00:32:45.389 "trtype": "$TEST_TRANSPORT", 00:32:45.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.389 "adrfam": "ipv4", 00:32:45.389 "trsvcid": "$NVMF_PORT", 00:32:45.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.389 "hdgst": ${hdgst:-false}, 00:32:45.389 "ddgst": ${ddgst:-false} 00:32:45.389 }, 00:32:45.389 "method": "bdev_nvme_attach_controller" 00:32:45.389 } 00:32:45.389 EOF 00:32:45.389 )") 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.389 { 00:32:45.389 "params": { 00:32:45.389 "name": "Nvme$subsystem", 00:32:45.389 "trtype": "$TEST_TRANSPORT", 00:32:45.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.389 "adrfam": "ipv4", 00:32:45.389 "trsvcid": "$NVMF_PORT", 00:32:45.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.389 "hdgst": ${hdgst:-false}, 00:32:45.389 "ddgst": ${ddgst:-false} 00:32:45.389 }, 00:32:45.389 "method": "bdev_nvme_attach_controller" 00:32:45.389 } 00:32:45.389 EOF 00:32:45.389 )") 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3642504 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.389 "params": { 00:32:45.389 "name": "Nvme1", 00:32:45.389 "trtype": "tcp", 00:32:45.389 "traddr": "10.0.0.2", 00:32:45.389 "adrfam": "ipv4", 00:32:45.389 "trsvcid": "4420", 00:32:45.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:45.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:45.389 "hdgst": false, 00:32:45.389 "ddgst": false 00:32:45.389 }, 00:32:45.389 "method": "bdev_nvme_attach_controller" 00:32:45.389 }' 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.389 "params": { 00:32:45.389 "name": "Nvme1", 00:32:45.389 "trtype": "tcp", 00:32:45.389 "traddr": "10.0.0.2", 00:32:45.389 "adrfam": "ipv4", 00:32:45.389 "trsvcid": "4420", 00:32:45.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:45.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:45.389 "hdgst": false, 00:32:45.389 "ddgst": false 00:32:45.389 }, 00:32:45.389 "method": "bdev_nvme_attach_controller" 00:32:45.389 }' 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:45.389 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.389 "params": { 00:32:45.389 "name": "Nvme1", 00:32:45.389 "trtype": "tcp", 00:32:45.389 "traddr": "10.0.0.2", 00:32:45.389 "adrfam": "ipv4", 00:32:45.389 "trsvcid": "4420", 00:32:45.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:45.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:45.389 "hdgst": false, 00:32:45.389 "ddgst": false 00:32:45.389 }, 00:32:45.390 "method": "bdev_nvme_attach_controller" 00:32:45.390 }' 00:32:45.390 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:45.390 07:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.390 "params": { 00:32:45.390 "name": "Nvme1", 00:32:45.390 "trtype": "tcp", 00:32:45.390 "traddr": "10.0.0.2", 00:32:45.390 "adrfam": "ipv4", 00:32:45.390 "trsvcid": "4420", 00:32:45.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:45.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:45.390 "hdgst": false, 00:32:45.390 "ddgst": false 00:32:45.390 }, 00:32:45.390 "method": "bdev_nvme_attach_controller" 00:32:45.390 }' 00:32:45.390 [2024-11-20 07:47:03.487608] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:32:45.390 [2024-11-20 07:47:03.487669] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:45.390 [2024-11-20 07:47:03.488139] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:32:45.390 [2024-11-20 07:47:03.488198] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:45.390 [2024-11-20 07:47:03.493045] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:32:45.390 [2024-11-20 07:47:03.493104] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:45.390 [2024-11-20 07:47:03.495593] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:32:45.390 [2024-11-20 07:47:03.495679] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:45.651 [2024-11-20 07:47:03.680508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.651 [2024-11-20 07:47:03.720118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:45.651 [2024-11-20 07:47:03.740253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.651 [2024-11-20 07:47:03.778552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:45.651 [2024-11-20 07:47:03.804963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.651 [2024-11-20 07:47:03.843449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:45.912 [2024-11-20 07:47:03.897535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.912 [2024-11-20 07:47:03.937939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:45.912 Running I/O for 1 seconds... 00:32:45.912 Running I/O for 1 seconds... 00:32:45.912 Running I/O for 1 seconds... 00:32:45.912 Running I/O for 1 seconds... 
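Four bdevperf initiators are now running in parallel against the same namespace, one per workload (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), each with queue depth 128, 4 KiB IOs and a one-second run, and each reading its bdev config on fd 63. A sketch of how one of them is wired, with the attach parameters exactly as the resolved JSON above shows (gen_nvmf_target_json wraps that entry in the usual "subsystems"/"bdev" envelope, which the trace does not print, so the envelope is assumed):

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  $bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 \
      63< <(gen_nvmf_target_json) &
  WRITE_PID=$!
  # the read/flush/unmap instances differ only in -m, -i and -w; the script then
  # waits on each PID in turn, which is the "wait 3642504" seen in the trace
  wait "$WRITE_PID"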
00:32:46.856 6895.00 IOPS, 26.93 MiB/s
00:32:46.856 Latency(us)
00:32:46.856 [2024-11-20T06:47:05.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:46.856 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:32:46.856 Nvme1n1 : 1.02 6918.56 27.03 0.00 0.00 18345.85 2293.76 24139.09
00:32:46.856 [2024-11-20T06:47:05.066Z] ===================================================================================================================
00:32:46.856 [2024-11-20T06:47:05.066Z] Total : 6918.56 27.03 0.00 0.00 18345.85 2293.76 24139.09
00:32:46.856 6481.00 IOPS, 25.32 MiB/s
[2024-11-20T06:47:05.066Z] 12921.00 IOPS, 50.47 MiB/s
00:32:46.856 Latency(us)
00:32:46.856 [2024-11-20T06:47:05.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:46.856 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:32:46.856 Nvme1n1 : 1.01 6565.95 25.65 0.00 0.00 19429.08 5324.80 35170.99
00:32:46.856 [2024-11-20T06:47:05.066Z] ===================================================================================================================
00:32:46.856 [2024-11-20T06:47:05.066Z] Total : 6565.95 25.65 0.00 0.00 19429.08 5324.80 35170.99
00:32:47.117
00:32:47.117 Latency(us)
00:32:47.117 [2024-11-20T06:47:05.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:47.117 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:32:47.117 Nvme1n1 : 1.01 12991.25 50.75 0.00 0.00 9823.13 2717.01 15837.87
00:32:47.117 [2024-11-20T06:47:05.327Z] ===================================================================================================================
00:32:47.117 [2024-11-20T06:47:05.327Z] Total : 12991.25 50.75 0.00 0.00 9823.13 2717.01 15837.87
00:32:47.117 187248.00 IOPS, 731.44 MiB/s
00:32:47.117 Latency(us)
00:32:47.117 [2024-11-20T06:47:05.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:47.117 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:32:47.117 Nvme1n1 : 1.00 186878.32 729.99 0.00 0.00 681.20 302.08 1966.08
00:32:47.117 [2024-11-20T06:47:05.327Z] ===================================================================================================================
00:32:47.117 [2024-11-20T06:47:05.327Z] Total : 186878.32 729.99 0.00 0.00 681.20 302.08 1966.08
00:32:47.117 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3642506
00:32:47.117 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3642509
00:32:47.117 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3642512
00:32:47.117 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:47.117 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:47.117 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:32:47.117 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:47.117 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:32:47.117 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:32:47.117 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:47.117 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:32:47.117 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:47.118 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:32:47.118 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:47.118 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:47.118 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:32:47.118 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:47.118 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:32:47.118 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:32:47.118 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3642451 ']'
00:32:47.118 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3642451
00:32:47.118 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3642451 ']'
00:32:47.118 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3642451
00:32:47.118 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname
00:32:47.118 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:32:47.118 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3642451
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3642451'
00:32:47.379 killing process with pid 3642451
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3642451
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3642451
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:47.379 07:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:49.929 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:49.929
00:32:49.929 real 0m13.133s
00:32:49.929 user 0m15.048s
00:32:49.929 sys 0m7.894s
00:32:49.929 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable
00:32:49.929 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:32:49.929 ************************************
00:32:49.929 END TEST nvmf_bdev_io_wait
00:32:49.929 ************************************
00:32:49.929 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:32:49.929 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:32:49.929 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:32:49.929 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:49.929 ************************************
00:32:49.929 START TEST nvmf_queue_depth
00:32:49.929 ************************************
00:32:49.929 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:32:49.929 * Looking for test storage...
00:32:49.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:32:49.929 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:32:49.929 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-:
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-:
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<'
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:32:49.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:49.930 --rc genhtml_branch_coverage=1
00:32:49.930 --rc genhtml_function_coverage=1
00:32:49.930 --rc genhtml_legend=1
00:32:49.930 --rc geninfo_all_blocks=1
00:32:49.930 --rc geninfo_unexecuted_blocks=1
00:32:49.930
00:32:49.930 '
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:32:49.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:49.930 --rc genhtml_branch_coverage=1
00:32:49.930 --rc genhtml_function_coverage=1
00:32:49.930 --rc genhtml_legend=1
00:32:49.930 --rc geninfo_all_blocks=1
00:32:49.930 --rc geninfo_unexecuted_blocks=1
00:32:49.930
00:32:49.930 '
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:32:49.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:49.930 --rc genhtml_branch_coverage=1
00:32:49.930 --rc genhtml_function_coverage=1
00:32:49.930 --rc genhtml_legend=1
00:32:49.930 --rc geninfo_all_blocks=1
00:32:49.930 --rc geninfo_unexecuted_blocks=1
00:32:49.930
00:32:49.930 '
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:32:49.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:49.930 --rc genhtml_branch_coverage=1
00:32:49.930 --rc genhtml_function_coverage=1
00:32:49.930 --rc genhtml_legend=1
00:32:49.930 --rc geninfo_all_blocks=1
00:32:49.930 --rc geninfo_unexecuted_blocks=1
00:32:49.930
00:32:49.930 '
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:49.930 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable
00:32:49.931 07:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=()
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=()
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=()
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=()
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=()
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=()
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=()
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:32:58.072 Found 0000:31:00.0 (0x8086 - 0x159b)
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:58.072 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:32:58.073 Found 0000:31:00.1 (0x8086 - 0x159b)
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:32:58.073 Found net devices under 0000:31:00.0: cvl_0_0
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:32:58.073 Found net devices under 0000:31:00.1: cvl_0_1
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:58.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:58.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms
00:32:58.073
00:32:58.073 --- 10.0.0.2 ping statistics ---
00:32:58.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:58.073 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:58.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:58.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms
00:32:58.073
00:32:58.073 --- 10.0.0.1 ping statistics ---
00:32:58.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:58.073 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3647215
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3647215
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3647215 ']'
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:58.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable
00:32:58.073 07:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:32:58.073 [2024-11-20 07:47:15.597724] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
[2024-11-20 07:47:15.599027] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
[2024-11-20 07:47:15.599080] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-20 07:47:15.701212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 07:47:15.751438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-20 07:47:15.751488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-20 07:47:15.751496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-20 07:47:15.751504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-20 07:47:15.751510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-11-20 07:47:15.752278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-20 07:47:15.832782] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
[2024-11-20 07:47:15.833061] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:32:58.335 [2024-11-20 07:47:16.453130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:32:58.335 Malloc0
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:58.335 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:32:58.596 [2024-11-20 07:47:16.541312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:58.596 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:58.596 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3647296
00:32:58.596 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:32:58.596 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:32:58.596 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3647296 /var/tmp/bdevperf.sock
00:32:58.596 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3647296 ']'
00:32:58.596 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:58.596 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100
00:32:58.596 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:58.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:58.596 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable
00:32:58.596 07:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:32:58.596 [2024-11-20 07:47:16.607049] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization...
00:32:58.596 [2024-11-20 07:47:16.607113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647296 ]
[2024-11-20 07:47:16.701493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 07:47:16.755303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:59.592 07:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:32:59.592 07:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0
00:32:59.592 07:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:59.592 07:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:59.592 07:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:32:59.592 NVMe0n1
00:32:59.592 07:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:59.592 07:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
Running I/O for 10 seconds...
00:33:01.576 8348.00 IOPS, 32.61 MiB/s
[2024-11-20T06:47:20.727Z] 8708.50 IOPS, 34.02 MiB/s
[2024-11-20T06:47:21.669Z] 9454.33 IOPS, 36.93 MiB/s
[2024-11-20T06:47:23.053Z] 10258.50 IOPS, 40.07 MiB/s
[2024-11-20T06:47:23.623Z] 10870.60 IOPS, 42.46 MiB/s
[2024-11-20T06:47:25.010Z] 11312.83 IOPS, 44.19 MiB/s
[2024-11-20T06:47:25.951Z] 11690.71 IOPS, 45.67 MiB/s
[2024-11-20T06:47:26.891Z] 11915.75 IOPS, 46.55 MiB/s
[2024-11-20T06:47:27.832Z] 12163.11 IOPS, 47.51 MiB/s
[2024-11-20T06:47:27.832Z] 12312.60 IOPS, 48.10 MiB/s
00:33:09.622 Latency(us)
00:33:09.623 [2024-11-20T06:47:27.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:09.623 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:33:09.623 Verification LBA range: start 0x0 length 0x4000
00:33:09.623 NVMe0n1 : 10.05 12348.78 48.24 0.00 0.00 82643.92 12670.29 77769.39
00:33:09.623 [2024-11-20T06:47:27.833Z] ===================================================================================================================
00:33:09.623 [2024-11-20T06:47:27.833Z] Total : 12348.78 48.24 0.00 0.00 82643.92 12670.29 77769.39
00:33:09.623 {
00:33:09.623 "results": [
00:33:09.623 {
00:33:09.623 "job": "NVMe0n1",
00:33:09.623 "core_mask": "0x1",
00:33:09.623 "workload": "verify",
00:33:09.623 "status": "finished",
00:33:09.623 "verify_range": {
00:33:09.623 "start": 0,
00:33:09.623 "length": 16384
00:33:09.623 },
00:33:09.623 "queue_depth": 1024,
00:33:09.623 "io_size": 4096,
00:33:09.623 "runtime": 10.048681,
00:33:09.623 "iops": 12348.784880324094,
00:33:09.623 "mibps": 48.23744093876599,
00:33:09.623 "io_failed": 0,
00:33:09.623 "io_timeout": 0,
00:33:09.623 "avg_latency_us": 82643.91967270803,
00:33:09.623 "min_latency_us": 12670.293333333333,
00:33:09.623 "max_latency_us": 77769.38666666667
00:33:09.623 }
00:33:09.623 ],
00:33:09.623 "core_count": 1
00:33:09.623 }
00:33:09.623 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3647296
00:33:09.623 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3647296 ']'
00:33:09.623 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3647296
00:33:09.623 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname
00:33:09.623 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:33:09.623 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3647296
00:33:09.623 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:33:09.623 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:33:09.623 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3647296'
00:33:09.623 killing process with pid 3647296
00:33:09.623 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3647296
00:33:09.623 Received shutdown signal, test time was about 10.000000 seconds
00:33:09.623
00:33:09.623 Latency(us)
00:33:09.623 [2024-11-20T06:47:27.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:09.623 [2024-11-20T06:47:27.833Z] ===================================================================================================================
00:33:09.623 [2024-11-20T06:47:27.833Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:09.623 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3647296
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:09.883 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3647215 ']'
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3647215
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3647215 ']'
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3647215
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:33:09.883 07:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3647215
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3647215'
killing process with pid 3647215
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3647215
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3647215
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
07:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:12.057 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:12.057
00:33:12.057 real 0m22.523s
00:33:12.057 user 0m24.528s
00:33:12.057 sys 0m7.582s
00:33:12.057 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable
00:33:12.057 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:12.057 ************************************
00:33:12.057 END TEST nvmf_queue_depth
00:33:12.057 ************************************
00:33:12.057 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:33:12.057 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:33:12.057 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:33:12.057 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:12.318 ************************************
00:33:12.318 START TEST nvmf_target_multipath
00:33:12.318 ************************************
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:33:12.318 * Looking for test storage...
00:33:12.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:12.318 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0
00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:33:12.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:12.319 --rc genhtml_branch_coverage=1
00:33:12.319 --rc genhtml_function_coverage=1
00:33:12.319 --rc genhtml_legend=1
00:33:12.319 --rc geninfo_all_blocks=1
00:33:12.319 --rc geninfo_unexecuted_blocks=1
00:33:12.319
00:33:12.319 '
00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:33:12.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:12.319 --rc genhtml_branch_coverage=1
00:33:12.319 --rc genhtml_function_coverage=1
00:33:12.319 --rc genhtml_legend=1
00:33:12.319 --rc geninfo_all_blocks=1
00:33:12.319 --rc geninfo_unexecuted_blocks=1
00:33:12.319
00:33:12.319 '
00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:33:12.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:12.319 --rc genhtml_branch_coverage=1
00:33:12.319 --rc genhtml_function_coverage=1
00:33:12.319 --rc genhtml_legend=1
00:33:12.319 --rc geninfo_all_blocks=1 00:33:12.319 --rc geninfo_unexecuted_blocks=1 00:33:12.319 00:33:12.319 ' 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:12.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.319 --rc genhtml_branch_coverage=1 00:33:12.319 --rc genhtml_function_coverage=1 00:33:12.319 --rc genhtml_legend=1 00:33:12.319 --rc geninfo_all_blocks=1 00:33:12.319 --rc geninfo_unexecuted_blocks=1 00:33:12.319 00:33:12.319 ' 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:12.319 07:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:20.460 07:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:20.460 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:20.460 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:20.460 07:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:20.460 Found net devices under 0000:31:00.0: cvl_0_0 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:20.460 Found net devices under 0000:31:00.1: cvl_0_1 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:20.460 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:20.461 07:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:20.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:20.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:33:20.461 00:33:20.461 --- 10.0.0.2 ping statistics --- 00:33:20.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.461 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:20.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:20.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:33:20.461 00:33:20.461 --- 10.0.0.1 ping statistics --- 00:33:20.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.461 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:20.461 only one NIC for nvmf test 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.461 rmmod nvme_tcp 00:33:20.461 rmmod nvme_fabrics 00:33:20.461 rmmod nvme_keyring 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:20.461 07:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.461 07:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:22.372 07:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.372 00:33:22.372 real 0m10.062s 00:33:22.372 user 0m2.184s 00:33:22.372 sys 0m5.824s 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:22.372 ************************************ 00:33:22.372 END TEST nvmf_target_multipath 00:33:22.372 ************************************ 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:22.372 ************************************ 00:33:22.372 START TEST nvmf_zcopy 00:33:22.372 ************************************ 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:22.372 * Looking for test storage... 
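Each suite in this log, including the nvmf_zcopy run starting here, is launched through the run_test wrapper from common/autotest_common.sh (the @1103/@1109/@1127 entries above): it prints the starred START TEST banner, executes the suite script under time(1) — the source of the real/user/sys summaries such as the 0m10.062s block above — and closes with a matching END TEST banner. A rough sketch of the wrapper's visible behavior, inferred from the banners and timing output in this log; banner width, argument checking, and return-code handling in the real helper differ in detail:

    # Inferred shape of run_test; not the verbatim SPDK helper.
    run_test() {
        local name=$1
        shift
        [ $# -le 1 ] && return 1        # cf. the '[' 4 -le 1 ']' check at @1103
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                       # e.g. time .../target/zcopy.sh --transport=tcp --interrupt-mode
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }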
00:33:22.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:33:22.372 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:22.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.633 --rc genhtml_branch_coverage=1 00:33:22.633 --rc genhtml_function_coverage=1 00:33:22.633 --rc genhtml_legend=1 00:33:22.633 --rc geninfo_all_blocks=1 00:33:22.633 --rc geninfo_unexecuted_blocks=1 00:33:22.633 00:33:22.633 ' 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:22.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.633 --rc genhtml_branch_coverage=1 00:33:22.633 --rc genhtml_function_coverage=1 00:33:22.633 --rc genhtml_legend=1 00:33:22.633 --rc geninfo_all_blocks=1 00:33:22.633 --rc geninfo_unexecuted_blocks=1 00:33:22.633 00:33:22.633 ' 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:22.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.633 --rc genhtml_branch_coverage=1 00:33:22.633 --rc genhtml_function_coverage=1 00:33:22.633 --rc genhtml_legend=1 00:33:22.633 --rc geninfo_all_blocks=1 00:33:22.633 --rc geninfo_unexecuted_blocks=1 00:33:22.633 00:33:22.633 ' 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:22.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.633 --rc genhtml_branch_coverage=1 00:33:22.633 --rc genhtml_function_coverage=1 00:33:22.633 --rc genhtml_legend=1 00:33:22.633 --rc geninfo_all_blocks=1 00:33:22.633 --rc geninfo_unexecuted_blocks=1 00:33:22.633 00:33:22.633 ' 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.633 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.634 07:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:22.634 07:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:30.779 07:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:30.779 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:30.779 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:30.779 Found net devices under 0000:31:00.0: cvl_0_0 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.779 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:30.780 Found net devices under 0000:31:00.1: cvl_0_1 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.780 07:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:30.780 07:47:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:30.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:33:30.780 00:33:30.780 --- 10.0.0.2 ping statistics --- 00:33:30.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.780 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:30.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:33:30.780 00:33:30.780 --- 10.0.0.1 ping statistics --- 00:33:30.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.780 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3657964 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3657964 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3657964 ']' 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:30.780 07:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 [2024-11-20 07:47:48.350321] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:30.780 [2024-11-20 07:47:48.351484] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:33:30.780 [2024-11-20 07:47:48.351536] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.780 [2024-11-20 07:47:48.454175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.780 [2024-11-20 07:47:48.504078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.780 [2024-11-20 07:47:48.504134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.780 [2024-11-20 07:47:48.504143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.780 [2024-11-20 07:47:48.504150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.780 [2024-11-20 07:47:48.504156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:30.780 [2024-11-20 07:47:48.504940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.780 [2024-11-20 07:47:48.584825] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:30.780 [2024-11-20 07:47:48.585099] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
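[Editor's note] The trace above shows nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocking until the target listens on /var/tmp/spdk.sock. A minimal standalone sketch of that launch-and-wait pattern, assuming the SPDK build tree path from this job; the polling loop is only an approximation of the suite's waitforlisten helper, using rpc_get_methods as a liveness probe:

    # Launch the NVMe-oF target in the network namespace created earlier
    # (requires root; flags copied verbatim from the trace above).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # Poll the RPC UNIX socket until the app answers, roughly what
    # waitforlisten does before the test proceeds.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done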
00:33:31.041 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:31.041 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:33:31.041 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:31.041 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:31.041 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.041 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.041 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:31.041 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:31.041 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.041 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.041 [2024-11-20 07:47:49.237811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.041 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.301 [2024-11-20 07:47:49.266138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:31.301 07:47:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.301 malloc0 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:31.301 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:31.301 { 00:33:31.301 "params": { 00:33:31.301 "name": "Nvme$subsystem", 00:33:31.301 "trtype": "$TEST_TRANSPORT", 00:33:31.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:31.301 "adrfam": "ipv4", 00:33:31.301 "trsvcid": "$NVMF_PORT", 00:33:31.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:31.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:31.302 "hdgst": ${hdgst:-false}, 00:33:31.302 "ddgst": ${ddgst:-false} 00:33:31.302 }, 00:33:31.302 "method": "bdev_nvme_attach_controller" 00:33:31.302 } 00:33:31.302 EOF 00:33:31.302 )") 00:33:31.302 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:31.302 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:31.302 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:31.302 07:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:31.302 "params": { 00:33:31.302 "name": "Nvme1", 00:33:31.302 "trtype": "tcp", 00:33:31.302 "traddr": "10.0.0.2", 00:33:31.302 "adrfam": "ipv4", 00:33:31.302 "trsvcid": "4420", 00:33:31.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:31.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:31.302 "hdgst": false, 00:33:31.302 "ddgst": false 00:33:31.302 }, 00:33:31.302 "method": "bdev_nvme_attach_controller" 00:33:31.302 }' 00:33:31.302 [2024-11-20 07:47:49.379646] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
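[Editor's note] rpc_cmd in the trace above is the suite's shell wrapper that forwards its arguments to SPDK's scripts/rpc.py, so the zcopy target setup logged here can be reproduced directly. A hedged re-creation using the exact flags and names from the log (the RPC variable is an assumption carried over from the sketch above; flag semantics are paraphrased, not authoritative):

    # Provision the zcopy test target exactly as target/zcopy.sh logged it.
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy       # TCP transport with zero-copy enabled
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0              # RAM-backed bdev, size/block size as logged
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The JSON document printed just above is what gen_nvmf_target_json emits for bdevperf: a bdev_nvme_attach_controller entry pointing at the same 10.0.0.2:4420 listener, fed to bdevperf via --json /dev/fd/62.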
00:33:31.302 [2024-11-20 07:47:49.379707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658001 ] 00:33:31.302 [2024-11-20 07:47:49.474999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.563 [2024-11-20 07:47:49.527616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.563 Running I/O for 10 seconds... 00:33:33.890 6357.00 IOPS, 49.66 MiB/s [2024-11-20T06:47:53.042Z] 6516.00 IOPS, 50.91 MiB/s [2024-11-20T06:47:53.984Z] 6534.67 IOPS, 51.05 MiB/s [2024-11-20T06:47:54.925Z] 6556.75 IOPS, 51.22 MiB/s [2024-11-20T06:47:55.867Z] 6970.00 IOPS, 54.45 MiB/s [2024-11-20T06:47:56.809Z] 7415.00 IOPS, 57.93 MiB/s [2024-11-20T06:47:57.751Z] 7734.00 IOPS, 60.42 MiB/s [2024-11-20T06:47:59.138Z] 7972.38 IOPS, 62.28 MiB/s [2024-11-20T06:48:00.077Z] 8160.00 IOPS, 63.75 MiB/s [2024-11-20T06:48:00.077Z] 8307.80 IOPS, 64.90 MiB/s 00:33:41.867 Latency(us) 00:33:41.867 [2024-11-20T06:48:00.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.867 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:41.867 Verification LBA range: start 0x0 length 0x1000 00:33:41.867 Nvme1n1 : 10.01 8313.30 64.95 0.00 0.00 15351.48 781.65 28180.48 00:33:41.867 [2024-11-20T06:48:00.077Z] =================================================================================================================== 00:33:41.867 [2024-11-20T06:48:00.077Z] Total : 8313.30 64.95 0.00 0.00 15351.48 781.65 28180.48 00:33:41.867 07:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3660002 00:33:41.867 07:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:41.867 07:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:41.867 07:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:41.867 07:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:41.867 07:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:41.867 07:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:41.867 07:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:41.868 07:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:41.868 { 00:33:41.868 "params": { 00:33:41.868 "name": "Nvme$subsystem", 00:33:41.868 "trtype": "$TEST_TRANSPORT", 00:33:41.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:41.868 "adrfam": "ipv4", 00:33:41.868 "trsvcid": "$NVMF_PORT", 00:33:41.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:41.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:41.868 "hdgst": ${hdgst:-false}, 00:33:41.868 "ddgst": ${ddgst:-false} 00:33:41.868 }, 00:33:41.868 "method": "bdev_nvme_attach_controller" 00:33:41.868 } 00:33:41.868 EOF 00:33:41.868 )") 00:33:41.868 [2024-11-20 07:47:59.833342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:33:41.868 [2024-11-20 07:47:59.833370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 07:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:41.868 07:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:41.868 07:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:41.868 07:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:41.868 "params": { 00:33:41.868 "name": "Nvme1", 00:33:41.868 "trtype": "tcp", 00:33:41.868 "traddr": "10.0.0.2", 00:33:41.868 "adrfam": "ipv4", 00:33:41.868 "trsvcid": "4420", 00:33:41.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:41.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:41.868 "hdgst": false, 00:33:41.868 "ddgst": false 00:33:41.868 }, 00:33:41.868 "method": "bdev_nvme_attach_controller" 00:33:41.868 }' 00:33:41.868 [2024-11-20 07:47:59.845310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:47:59.845319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:47:59.857308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:47:59.857314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:47:59.869308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:47:59.869316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:47:59.877835] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:33:41.868 [2024-11-20 07:47:59.877881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660002 ] 00:33:41.868 [2024-11-20 07:47:59.881307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:47:59.881315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:47:59.893307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:47:59.893315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:47:59.905308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:47:59.905315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:47:59.917307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:47:59.917315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:47:59.929307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:47:59.929314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:47:59.941308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:47:59.941316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:47:59.953307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:47:59.953315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:47:59.959375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.868 [2024-11-20 07:47:59.965308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:47:59.965316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:47:59.977308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:47:59.977318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:47:59.988790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.868 [2024-11-20 07:47:59.989307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:47:59.989316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:48:00.001310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:48:00.001319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:48:00.013412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:48:00.013447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:48:00.025312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:41.868 [2024-11-20 07:48:00.025322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:48:00.037308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:48:00.037317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:48:00.049308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:48:00.049318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.868 [2024-11-20 07:48:00.061316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.868 [2024-11-20 07:48:00.061332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.130 [2024-11-20 07:48:00.073310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.130 [2024-11-20 07:48:00.073320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.130 [2024-11-20 07:48:00.085309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.130 [2024-11-20 07:48:00.085320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.130 [2024-11-20 07:48:00.097310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.130 [2024-11-20 07:48:00.097320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.130 [2024-11-20 07:48:00.109308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.130 [2024-11-20 07:48:00.109316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.130 [2024-11-20 07:48:00.121308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.130 [2024-11-20 07:48:00.121316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.130 [2024-11-20 07:48:00.133309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.130 [2024-11-20 07:48:00.133318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.130 [2024-11-20 07:48:00.145309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.130 [2024-11-20 07:48:00.145318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.130 [2024-11-20 07:48:00.157307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.130 [2024-11-20 07:48:00.157316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.130 [2024-11-20 07:48:00.169307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.130 [2024-11-20 07:48:00.169314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.130 [2024-11-20 07:48:00.181308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.130 [2024-11-20 07:48:00.181316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.130 [2024-11-20 07:48:00.193307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.130 [2024-11-20 07:48:00.193316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.131 [2024-11-20 
07:48:00.205307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.131 [2024-11-20 07:48:00.205314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.131 [2024-11-20 07:48:00.217307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.131 [2024-11-20 07:48:00.217315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.131 [2024-11-20 07:48:00.229308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.131 [2024-11-20 07:48:00.229318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.131 [2024-11-20 07:48:00.241307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.131 [2024-11-20 07:48:00.241315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.131 [2024-11-20 07:48:00.253307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.131 [2024-11-20 07:48:00.253314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.131 [2024-11-20 07:48:00.265307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.131 [2024-11-20 07:48:00.265314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.131 [2024-11-20 07:48:00.277429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.131 [2024-11-20 07:48:00.277441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.131 [2024-11-20 07:48:00.289312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.131 [2024-11-20 07:48:00.289325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.131 Running I/O for 5 seconds... 
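[Editor's note] The repeated ERROR pairs above are consistent with the test deliberately re-adding namespace 1 while bdevperf I/O is in flight and treating the failure as the expected outcome: each subsystem.c "NSID 1 already in use" line is immediately followed by the matching nvmf_rpc.c "Unable to add namespace". A sketch of one such negative probe under that assumption, reusing SPDK_DIR and the socket path from the earlier sketches:

    # NSID 1 was attached during setup, so this re-add must fail;
    # success here would indicate a bug in the target.
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
    if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
        echo "unexpected success: NSID 1 should already be in use" >&2
        exit 1
    fi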
00:33:42.131 [2024-11-20 07:48:00.304122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.131 [2024-11-20 07:48:00.304138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.131 [2024-11-20 07:48:00.317894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.131 [2024-11-20 07:48:00.317909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.131 [2024-11-20 07:48:00.332475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.131 [2024-11-20 07:48:00.332490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.392 [2024-11-20 07:48:00.345629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.392 [2024-11-20 07:48:00.345644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.392 [2024-11-20 07:48:00.361076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.392 [2024-11-20 07:48:00.361092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.392 [2024-11-20 07:48:00.374064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.392 [2024-11-20 07:48:00.374078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.392 [2024-11-20 07:48:00.388377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.392 [2024-11-20 07:48:00.388392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.392 [2024-11-20 07:48:00.401440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.392 [2024-11-20 07:48:00.401455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.392 [2024-11-20 07:48:00.414327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.392 [2024-11-20 07:48:00.414342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.392 [2024-11-20 07:48:00.428418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.392 [2024-11-20 07:48:00.428433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.392 [2024-11-20 07:48:00.441242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.392 [2024-11-20 07:48:00.441258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.392 [2024-11-20 07:48:00.454094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.392 [2024-11-20 07:48:00.454109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.392 [2024-11-20 07:48:00.468554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.392 [2024-11-20 07:48:00.468569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.392 [2024-11-20 07:48:00.481442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.392 [2024-11-20 07:48:00.481457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.392 [2024-11-20 07:48:00.494197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.392 
[2024-11-20 07:48:00.494212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.392 [2024-11-20 07:48:00.508113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.392 [2024-11-20 07:48:00.508128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.393 [2024-11-20 07:48:00.520795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.393 [2024-11-20 07:48:00.520809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.393 [2024-11-20 07:48:00.534234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.393 [2024-11-20 07:48:00.534248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.393 [2024-11-20 07:48:00.548120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.393 [2024-11-20 07:48:00.548134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.393 [2024-11-20 07:48:00.560998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.393 [2024-11-20 07:48:00.561013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.393 [2024-11-20 07:48:00.573812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.393 [2024-11-20 07:48:00.573826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.393 [2024-11-20 07:48:00.588731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.393 [2024-11-20 07:48:00.588750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.602053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.602068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.616465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.616479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.629636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.629651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.644763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.644778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.657995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.658008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.672663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.672678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.685771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.685785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.700839] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.700854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.714113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.714127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.728301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.728317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.741475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.741495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.754247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.754262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.768632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.768647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.781868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.781882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.796339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.796353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.809154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.809168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.822147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.822162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.836656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.836671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.654 [2024-11-20 07:48:00.849961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.654 [2024-11-20 07:48:00.849975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:00.865060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:00.865074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:00.878573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:00.878588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:00.892632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:00.892647] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:00.905698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:00.905713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:00.920354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:00.920369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:00.933496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:00.933512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:00.946086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:00.946100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:00.960912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:00.960926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:00.974174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:00.974188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:00.989138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:00.989152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:01.002211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:01.002233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:01.016466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:01.016481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:01.029008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:01.029023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:01.042346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:01.042361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:01.057000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:01.057014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:01.070001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:01.070016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:01.084284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.916 [2024-11-20 07:48:01.084298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.916 [2024-11-20 07:48:01.097732] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:42.916 [2024-11-20 07:48:01.097750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:42.916 [2024-11-20 07:48:01.112048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:42.916 [2024-11-20 07:48:01.112062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2123 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace" pair repeats every 13-15 ms from 07:48:01.124 through 07:48:05.217 with only the timestamps changing; the fio throughput samples interleaved in that window follow ...]
00:33:43.177 18949.00 IOPS, 148.04 MiB/s [2024-11-20T06:48:01.387Z]
00:33:44.225 19021.50 IOPS, 148.61 MiB/s [2024-11-20T06:48:02.435Z]
00:33:45.270 19033.67 IOPS, 148.70 MiB/s [2024-11-20T06:48:03.480Z]
00:33:46.316 19022.00 IOPS, 148.61 MiB/s [2024-11-20T06:48:04.526Z]
00:33:47.097 [2024-11-20 07:48:05.232253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:47.097 [2024-11-20 07:48:05.232268]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.097 [2024-11-20 07:48:05.245294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.097 [2024-11-20 07:48:05.245309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.097 [2024-11-20 07:48:05.258346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.097 [2024-11-20 07:48:05.258360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.097 [2024-11-20 07:48:05.272350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.097 [2024-11-20 07:48:05.272364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.097 [2024-11-20 07:48:05.285623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.097 [2024-11-20 07:48:05.285637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.097 [2024-11-20 07:48:05.300321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.097 [2024-11-20 07:48:05.300336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.358 19049.00 IOPS, 148.82 MiB/s [2024-11-20T06:48:05.568Z] [2024-11-20 07:48:05.312240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.358 [2024-11-20 07:48:05.312255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.358 00:33:47.358 Latency(us) 00:33:47.358 [2024-11-20T06:48:05.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.358 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:47.358 Nvme1n1 : 5.01 19045.45 148.79 0.00 0.00 6713.82 2143.57 11304.96 00:33:47.358 [2024-11-20T06:48:05.568Z] =================================================================================================================== 00:33:47.358 [2024-11-20T06:48:05.568Z] Total : 19045.45 148.79 0.00 0.00 6713.82 2143.57 11304.96 00:33:47.358 [2024-11-20 07:48:05.321311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.358 [2024-11-20 07:48:05.321324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.358 [2024-11-20 07:48:05.333317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.358 [2024-11-20 07:48:05.333330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.358 [2024-11-20 07:48:05.345314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.358 [2024-11-20 07:48:05.345325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.358 [2024-11-20 07:48:05.357315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.358 [2024-11-20 07:48:05.357327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.358 [2024-11-20 07:48:05.369309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.358 [2024-11-20 07:48:05.369318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.358 [2024-11-20 07:48:05.381309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.358 [2024-11-20 
07:48:05.381318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.358 [2024-11-20 07:48:05.393309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.358 [2024-11-20 07:48:05.393318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.358 [2024-11-20 07:48:05.405307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.358 [2024-11-20 07:48:05.405316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.358 [2024-11-20 07:48:05.417307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:47.358 [2024-11-20 07:48:05.417315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:47.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3660002) - No such process 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3660002 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.358 delay0 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.358 07:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:47.359 [2024-11-20 07:48:05.548331] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:53.940 [2024-11-20 07:48:11.874908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159fbb0 is same with the state(6) to be set 00:33:53.940 Initializing NVMe Controllers 00:33:53.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:53.940 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:53.940 Initialization complete. Launching workers. 00:33:53.940 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 290, failed: 11800 00:33:53.940 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12010, failed to submit 80 00:33:53.940 success 11903, unsuccessful 107, failed 0 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:53.940 rmmod nvme_tcp 00:33:53.940 rmmod nvme_fabrics 00:33:53.940 rmmod nvme_keyring 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3657964 ']' 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3657964 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3657964 ']' 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3657964 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:53.940 07:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3657964 00:33:53.940 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3657964' 00:33:53.941 killing process with pid 3657964 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3657964 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3657964 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:53.941 07:48:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:53.941 07:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:56.550 00:33:56.550 real 0m33.785s 00:33:56.550 user 0m42.708s 00:33:56.550 sys 0m12.353s 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:56.550 ************************************ 00:33:56.550 END TEST nvmf_zcopy 00:33:56.550 ************************************ 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:56.550 ************************************ 00:33:56.550 START TEST nvmf_nmic 00:33:56.550 ************************************ 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:56.550 * Looking for test storage... 
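For reference, the nvmf_zcopy abort pass that ended above is driven by a short RPC/CLI sequence. A minimal manual sketch of the equivalent steps, with every flag copied from the zcopy.sh trace earlier in the log (run from the SPDK repo root; the delay bdev exists only to keep enough I/O in flight for the abort example to have something to cancel):

  # swap the in-use namespace for a slow delay bdev, then issue aborts against it
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort summary above (12010 aborts submitted, 11903 successful, 107 unsuccessful) is the expected shape: most delayed I/Os are still abortable, and a handful complete before the abort reaches them.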
00:33:56.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:56.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.550 --rc genhtml_branch_coverage=1 00:33:56.550 --rc genhtml_function_coverage=1 00:33:56.550 --rc genhtml_legend=1 00:33:56.550 --rc geninfo_all_blocks=1 00:33:56.550 --rc geninfo_unexecuted_blocks=1 00:33:56.550 00:33:56.550 ' 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:56.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.550 --rc genhtml_branch_coverage=1 00:33:56.550 --rc genhtml_function_coverage=1 00:33:56.550 --rc genhtml_legend=1 00:33:56.550 --rc geninfo_all_blocks=1 00:33:56.550 --rc geninfo_unexecuted_blocks=1 00:33:56.550 00:33:56.550 ' 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:56.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.550 --rc genhtml_branch_coverage=1 00:33:56.550 --rc genhtml_function_coverage=1 00:33:56.550 --rc genhtml_legend=1 00:33:56.550 --rc geninfo_all_blocks=1 00:33:56.550 --rc geninfo_unexecuted_blocks=1 00:33:56.550 00:33:56.550 ' 00:33:56.550 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:56.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.550 --rc genhtml_branch_coverage=1 00:33:56.550 --rc genhtml_function_coverage=1 00:33:56.550 --rc genhtml_legend=1 00:33:56.550 --rc geninfo_all_blocks=1 00:33:56.550 --rc geninfo_unexecuted_blocks=1 00:33:56.550 00:33:56.550 ' 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.551 07:48:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:56.551 07:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:04.752 07:48:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:04.752 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.752 07:48:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:04.752 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:04.752 Found net devices under 0000:31:00.0: cvl_0_0 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.752 
07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:04.752 Found net devices under 0000:31:00.1: cvl_0_1 00:34:04.752 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
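The nvmftestinit trace here and just below reduces to a small, reusable netns recipe: one physical port is moved into a private namespace to act as the target, while its sibling stays in the root namespace as the initiator. A condensed sketch, using the cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addressing from this trace (substitute your own NIC names on other hardware):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # reachability check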
00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:04.753 07:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:04.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:04.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.711 ms 00:34:04.753 00:34:04.753 --- 10.0.0.2 ping statistics --- 00:34:04.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.753 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:04.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:04.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:34:04.753 00:34:04.753 --- 10.0.0.1 ping statistics --- 00:34:04.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.753 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3667095 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 3667095 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3667095 ']' 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:04.753 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:04.753 [2024-11-20 07:48:22.155451] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:04.753 [2024-11-20 07:48:22.156626] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:34:04.753 [2024-11-20 07:48:22.156679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:04.753 [2024-11-20 07:48:22.265172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:04.753 [2024-11-20 07:48:22.320556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:04.753 [2024-11-20 07:48:22.320613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:04.753 [2024-11-20 07:48:22.320624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:04.753 [2024-11-20 07:48:22.320632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:04.753 [2024-11-20 07:48:22.320639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:04.753 [2024-11-20 07:48:22.322788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.753 [2024-11-20 07:48:22.322919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:04.753 [2024-11-20 07:48:22.323081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:04.753 [2024-11-20 07:48:22.323084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.753 [2024-11-20 07:48:22.402274] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:04.753 [2024-11-20 07:48:22.403358] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:04.753 [2024-11-20 07:48:22.403712] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
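nvmfappstart, traced above, amounts to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A minimal sketch under the same assumptions (namespace name, flags, and repo-root paths taken from this trace); polling spdk_get_version is one simple stand-in for the harness's waitforlisten helper, not what the script literally runs:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # wait until the app is up and serving RPCs on /var/tmp/spdk.sock
  until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done

With --interrupt-mode the reactors sleep on file descriptors instead of busy-polling, which is why the log shows each spdk_thread being switched to intr mode before any transport is created.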
00:34:04.753 [2024-11-20 07:48:22.404129] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:04.753 [2024-11-20 07:48:22.404173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:05.015 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:05.015 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:34:05.015 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:05.015 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:05.015 07:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.015 [2024-11-20 07:48:23.036206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.015 Malloc0 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
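Outside the test harness, the rpc_cmd calls above map one-to-one onto scripts/rpc.py invocations. The same provisioning by hand, with every argument taken from the trace:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420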
00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.015 [2024-11-20 07:48:23.128550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:05.015 test case1: single bdev can't be used in multiple subsystems 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.015 [2024-11-20 07:48:23.163805] bdev.c:8318:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:05.015 [2024-11-20 07:48:23.163832] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:05.015 [2024-11-20 07:48:23.163841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.015 request: 00:34:05.015 { 00:34:05.015 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:05.015 "namespace": { 00:34:05.015 "bdev_name": "Malloc0", 00:34:05.015 "no_auto_visible": false 00:34:05.015 }, 00:34:05.015 "method": "nvmf_subsystem_add_ns", 00:34:05.015 "req_id": 1 00:34:05.015 } 00:34:05.015 Got JSON-RPC error response 00:34:05.015 response: 00:34:05.015 { 00:34:05.015 "code": -32602, 00:34:05.015 "message": "Invalid parameters" 00:34:05.015 } 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:05.015 07:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:05.015 Adding namespace failed - expected result. 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:05.015 test case2: host connect to nvmf target in multiple paths 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.015 [2024-11-20 07:48:23.175969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.015 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:05.586 07:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:06.157 07:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:06.157 07:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:34:06.157 07:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:06.157 07:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:34:06.157 07:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:34:08.071 07:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:08.071 07:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:08.071 07:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:08.071 07:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:34:08.071 07:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:08.071 07:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:34:08.071 07:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:08.071 [global] 00:34:08.071 thread=1 00:34:08.071 invalidate=1 
00:34:08.071 rw=write 00:34:08.071 time_based=1 00:34:08.071 runtime=1 00:34:08.071 ioengine=libaio 00:34:08.071 direct=1 00:34:08.071 bs=4096 00:34:08.071 iodepth=1 00:34:08.071 norandommap=0 00:34:08.071 numjobs=1 00:34:08.071 00:34:08.071 verify_dump=1 00:34:08.071 verify_backlog=512 00:34:08.071 verify_state_save=0 00:34:08.071 do_verify=1 00:34:08.071 verify=crc32c-intel 00:34:08.071 [job0] 00:34:08.071 filename=/dev/nvme0n1 00:34:08.071 Could not set queue depth (nvme0n1) 00:34:08.332 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.332 fio-3.35 00:34:08.332 Starting 1 thread 00:34:09.717 00:34:09.717 job0: (groupid=0, jobs=1): err= 0: pid=3668148: Wed Nov 20 07:48:27 2024 00:34:09.717 read: IOPS=673, BW=2693KiB/s (2758kB/s)(2696KiB/1001msec) 00:34:09.717 slat (nsec): min=6836, max=57532, avg=22858.91, stdev=7911.13 00:34:09.717 clat (usec): min=551, max=1005, avg=806.83, stdev=69.28 00:34:09.717 lat (usec): min=558, max=1031, avg=829.69, stdev=72.97 00:34:09.717 clat percentiles (usec): 00:34:09.717 | 1.00th=[ 652], 5.00th=[ 668], 10.00th=[ 693], 20.00th=[ 758], 00:34:09.717 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 832], 00:34:09.717 | 70.00th=[ 857], 80.00th=[ 865], 90.00th=[ 889], 95.00th=[ 898], 00:34:09.717 | 99.00th=[ 930], 99.50th=[ 938], 99.90th=[ 1004], 99.95th=[ 1004], 00:34:09.717 | 99.99th=[ 1004] 00:34:09.717 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:09.717 slat (nsec): min=9669, max=68958, avg=25909.83, stdev=10879.12 00:34:09.717 clat (usec): min=198, max=615, avg=394.28, stdev=57.26 00:34:09.717 lat (usec): min=209, max=648, avg=420.19, stdev=61.24 00:34:09.717 clat percentiles (usec): 00:34:09.717 | 1.00th=[ 245], 5.00th=[ 306], 10.00th=[ 322], 20.00th=[ 334], 00:34:09.717 | 30.00th=[ 347], 40.00th=[ 400], 50.00th=[ 416], 60.00th=[ 424], 00:34:09.717 | 70.00th=[ 429], 80.00th=[ 441], 90.00th=[ 453], 95.00th=[ 469], 00:34:09.717 | 99.00th=[ 498], 99.50th=[ 515], 99.90th=[ 553], 99.95th=[ 619], 00:34:09.717 | 99.99th=[ 619] 00:34:09.717 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:09.717 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:09.717 lat (usec) : 250=0.88%, 500=58.83%, 750=7.95%, 1000=32.27% 00:34:09.717 lat (msec) : 2=0.06% 00:34:09.717 cpu : usr=2.60%, sys=3.90%, ctx=1698, majf=0, minf=1 00:34:09.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.717 issued rwts: total=674,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:09.717 00:34:09.717 Run status group 0 (all jobs): 00:34:09.717 READ: bw=2693KiB/s (2758kB/s), 2693KiB/s-2693KiB/s (2758kB/s-2758kB/s), io=2696KiB (2761kB), run=1001-1001msec 00:34:09.717 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:34:09.717 00:34:09.717 Disk stats (read/write): 00:34:09.717 nvme0n1: ios=599/1024, merge=0/0, ticks=813/391, in_queue=1204, util=97.60% 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:09.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:09.718 07:48:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:09.718 rmmod nvme_tcp 00:34:09.718 rmmod nvme_fabrics 00:34:09.718 rmmod nvme_keyring 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3667095 ']' 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3667095 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3667095 ']' 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3667095 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:09.718 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3667095 00:34:09.978 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:09.978 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:09.978 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 3667095' 00:34:09.978 killing process with pid 3667095 00:34:09.978 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3667095 00:34:09.978 07:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3667095 00:34:09.978 07:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:09.978 07:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:09.978 07:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:09.978 07:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:09.978 07:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:09.978 07:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:09.979 07:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:09.979 07:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:09.979 07:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:09.979 07:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.979 07:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.979 07:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:12.525 00:34:12.525 real 0m15.890s 00:34:12.525 user 0m35.556s 00:34:12.525 sys 0m7.756s 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:12.525 ************************************ 00:34:12.525 END TEST nvmf_nmic 00:34:12.525 ************************************ 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:12.525 ************************************ 00:34:12.525 START TEST nvmf_fio_target 00:34:12.525 ************************************ 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:12.525 * Looking for test storage... 
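The teardown that just closed nvmf_nmic follows a fixed pattern; a minimal standalone sketch of the same cleanup, assuming the target pid is in $nvmfpid and this rig's interface and namespace names (cvl_0_1, cvl_0_0_ns_spdk):

    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring           # unload host-side NVMe modules
    kill "$nvmfpid"                                             # stop the nvmf_tgt app...
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done    # ...and wait for it to exit (killprocess equivalent)
    iptables-save | grep -v SPDK_NVMF | iptables-restore        # drop only the SPDK-tagged firewall rules (iptr)
    ip -4 addr flush cvl_0_1                                    # clear the initiator-side address
    ip netns delete cvl_0_0_ns_spdk                             # remove_spdk_ns equivalent

The poll/wait loop is an assumption for the sketch; the harness's killprocess helper handles this internally.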
00:34:12.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:12.525 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:12.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.526 --rc genhtml_branch_coverage=1 00:34:12.526 --rc genhtml_function_coverage=1 00:34:12.526 --rc genhtml_legend=1 00:34:12.526 --rc geninfo_all_blocks=1 00:34:12.526 --rc geninfo_unexecuted_blocks=1 00:34:12.526 00:34:12.526 ' 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:12.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.526 --rc genhtml_branch_coverage=1 00:34:12.526 --rc genhtml_function_coverage=1 00:34:12.526 --rc genhtml_legend=1 00:34:12.526 --rc geninfo_all_blocks=1 00:34:12.526 --rc geninfo_unexecuted_blocks=1 00:34:12.526 00:34:12.526 ' 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:12.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.526 --rc genhtml_branch_coverage=1 00:34:12.526 --rc genhtml_function_coverage=1 00:34:12.526 --rc genhtml_legend=1 00:34:12.526 --rc geninfo_all_blocks=1 00:34:12.526 --rc geninfo_unexecuted_blocks=1 00:34:12.526 00:34:12.526 ' 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:12.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.526 --rc genhtml_branch_coverage=1 00:34:12.526 --rc genhtml_function_coverage=1 00:34:12.526 --rc genhtml_legend=1 00:34:12.526 --rc geninfo_all_blocks=1 00:34:12.526 --rc geninfo_unexecuted_blocks=1 00:34:12.526 
00:34:12.526 ' 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:12.526 07:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:20.670 07:48:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:20.670 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:20.670 07:48:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:20.671 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:20.671 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:20.671 Found net 
devices under 0000:31:00.0: cvl_0_0 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:20.671 Found net devices under 0000:31:00.1: cvl_0_1 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:20.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:20.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:34:20.671 00:34:20.671 --- 10.0.0.2 ping statistics --- 00:34:20.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.671 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:20.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:20.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:34:20.671 00:34:20.671 --- 10.0.0.1 ping statistics --- 00:34:20.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.671 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3672521 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3672521 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3672521 ']' 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
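The address plumbing verified by those two pings was set up with plain iproute2 a few entries earlier; the same wiring, written out as a standalone sketch for this rig's cvl_0_0/cvl_0_1 port names:

    ip netns add cvl_0_0_ns_spdk                               # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # tag the rule so the teardown's "grep -v SPDK_NVMF" strips it; comment text shortened here
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator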
00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:20.671 07:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.672 [2024-11-20 07:48:37.924226] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:20.672 [2024-11-20 07:48:37.925406] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:34:20.672 [2024-11-20 07:48:37.925459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.672 [2024-11-20 07:48:38.024430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:20.672 [2024-11-20 07:48:38.077648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:20.672 [2024-11-20 07:48:38.077700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:20.672 [2024-11-20 07:48:38.077708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.672 [2024-11-20 07:48:38.077720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.672 [2024-11-20 07:48:38.077726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:20.672 [2024-11-20 07:48:38.079786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.672 [2024-11-20 07:48:38.079882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:20.672 [2024-11-20 07:48:38.080180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:20.672 [2024-11-20 07:48:38.080186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.672 [2024-11-20 07:48:38.158789] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:20.672 [2024-11-20 07:48:38.160112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:20.672 [2024-11-20 07:48:38.160415] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:20.672 [2024-11-20 07:48:38.160647] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:20.672 [2024-11-20 07:48:38.160693] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
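The nvmfappstart call above can be reproduced outside the harness; a minimal sketch, assuming the SPDK build tree layout used on this rig and the namespace created earlier:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &   # shm id 0, full tracepoint mask, 4 cores
    nvmfpid=$!
    # block until the app answers on its RPC socket, roughly what waitforlisten does
    ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null

--interrupt-mode is what makes each reactor sleep on fd events instead of busy-polling, hence the "Set spdk_thread (...) to intr mode" notices once the poll groups come up.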
00:34:20.672 07:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:20.672 07:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:34:20.672 07:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:20.672 07:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:20.672 07:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.672 07:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.672 07:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:20.934 [2024-11-20 07:48:38.953261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.934 07:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.197 07:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:21.197 07:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.458 07:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:21.458 07:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.458 07:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:21.458 07:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.718 07:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:21.718 07:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:21.980 07:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:22.242 07:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:22.242 07:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:22.503 07:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:22.503 07:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:22.503 07:48:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:22.503 07:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:22.764 07:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:23.025 07:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:23.025 07:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:23.025 07:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:23.025 07:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:23.286 07:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:23.547 [2024-11-20 07:48:41.561147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.547 07:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:23.808 07:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:23.808 07:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:24.381 07:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:24.381 07:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:34:24.381 07:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:24.381 07:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:34:24.381 07:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:34:24.381 07:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:34:26.293 07:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:26.293 07:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:34:26.293 07:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:26.293 07:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:34:26.293 07:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:26.293 07:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:34:26.293 07:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:26.293 [global] 00:34:26.293 thread=1 00:34:26.293 invalidate=1 00:34:26.293 rw=write 00:34:26.293 time_based=1 00:34:26.293 runtime=1 00:34:26.293 ioengine=libaio 00:34:26.293 direct=1 00:34:26.293 bs=4096 00:34:26.293 iodepth=1 00:34:26.293 norandommap=0 00:34:26.293 numjobs=1 00:34:26.293 00:34:26.293 verify_dump=1 00:34:26.293 verify_backlog=512 00:34:26.293 verify_state_save=0 00:34:26.293 do_verify=1 00:34:26.293 verify=crc32c-intel 00:34:26.293 [job0] 00:34:26.293 filename=/dev/nvme0n1 00:34:26.293 [job1] 00:34:26.293 filename=/dev/nvme0n2 00:34:26.293 [job2] 00:34:26.293 filename=/dev/nvme0n3 00:34:26.293 [job3] 00:34:26.293 filename=/dev/nvme0n4 00:34:26.571 Could not set queue depth (nvme0n1) 00:34:26.571 Could not set queue depth (nvme0n2) 00:34:26.571 Could not set queue depth (nvme0n3) 00:34:26.571 Could not set queue depth (nvme0n4) 00:34:26.834 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.834 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.834 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.834 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.834 fio-3.35 00:34:26.834 Starting 4 threads 00:34:28.240 00:34:28.240 job0: (groupid=0, jobs=1): err= 0: pid=3674102: Wed Nov 20 07:48:46 2024 00:34:28.240 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:28.240 slat (nsec): min=7471, max=60268, avg=25116.96, stdev=2873.43 00:34:28.240 clat (usec): min=632, max=1374, avg=1038.71, stdev=125.37 00:34:28.240 lat (usec): min=658, max=1399, avg=1063.83, stdev=125.54 00:34:28.240 clat percentiles (usec): 00:34:28.240 | 1.00th=[ 709], 5.00th=[ 807], 10.00th=[ 881], 20.00th=[ 947], 00:34:28.240 | 30.00th=[ 979], 40.00th=[ 1012], 50.00th=[ 1045], 60.00th=[ 1074], 00:34:28.240 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1188], 95.00th=[ 1237], 00:34:28.240 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1369], 99.95th=[ 1369], 00:34:28.240 | 99.99th=[ 1369] 00:34:28.240 write: IOPS=683, BW=2733KiB/s (2799kB/s)(2736KiB/1001msec); 0 zone resets 00:34:28.240 slat (nsec): min=9574, max=52929, avg=31162.79, stdev=6864.22 00:34:28.240 clat (usec): min=215, max=1099, avg=620.47, stdev=143.19 00:34:28.240 lat (usec): min=248, max=1147, avg=651.63, stdev=145.28 00:34:28.240 clat percentiles (usec): 00:34:28.240 | 1.00th=[ 281], 5.00th=[ 375], 10.00th=[ 433], 20.00th=[ 490], 00:34:28.240 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 635], 60.00th=[ 668], 00:34:28.240 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 799], 95.00th=[ 848], 00:34:28.240 | 
99.00th=[ 906], 99.50th=[ 963], 99.90th=[ 1106], 99.95th=[ 1106], 00:34:28.240 | 99.99th=[ 1106] 00:34:28.240 bw ( KiB/s): min= 4096, max= 4096, per=41.05%, avg=4096.00, stdev= 0.00, samples=1 00:34:28.240 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:28.240 lat (usec) : 250=0.08%, 500=12.37%, 750=35.45%, 1000=24.50% 00:34:28.240 lat (msec) : 2=27.59% 00:34:28.240 cpu : usr=1.90%, sys=3.40%, ctx=1196, majf=0, minf=1 00:34:28.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.240 issued rwts: total=512,684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.240 job1: (groupid=0, jobs=1): err= 0: pid=3674103: Wed Nov 20 07:48:46 2024 00:34:28.240 read: IOPS=17, BW=70.5KiB/s (72.2kB/s)(72.0KiB/1021msec) 00:34:28.240 slat (nsec): min=7643, max=26477, avg=24162.22, stdev=5661.48 00:34:28.240 clat (usec): min=1096, max=42046, avg=37391.73, stdev=13186.33 00:34:28.240 lat (usec): min=1105, max=42073, avg=37415.89, stdev=13188.70 00:34:28.240 clat percentiles (usec): 00:34:28.240 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[ 1205], 20.00th=[41681], 00:34:28.240 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:28.240 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:28.240 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:28.240 | 99.99th=[42206] 00:34:28.240 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:34:28.240 slat (nsec): min=10266, max=52341, avg=30755.93, stdev=9246.47 00:34:28.240 clat (usec): min=284, max=929, avg=639.97, stdev=113.79 00:34:28.240 lat (usec): min=319, max=966, avg=670.73, stdev=117.89 00:34:28.240 clat percentiles (usec): 00:34:28.240 | 1.00th=[ 359], 5.00th=[ 437], 10.00th=[ 482], 20.00th=[ 553], 00:34:28.240 | 30.00th=[ 594], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:34:28.240 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 807], 00:34:28.240 | 99.00th=[ 865], 99.50th=[ 914], 99.90th=[ 930], 99.95th=[ 930], 00:34:28.240 | 99.99th=[ 930] 00:34:28.240 bw ( KiB/s): min= 4096, max= 4096, per=41.05%, avg=4096.00, stdev= 0.00, samples=1 00:34:28.240 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:28.240 lat (usec) : 500=14.91%, 750=66.98%, 1000=14.72% 00:34:28.240 lat (msec) : 2=0.38%, 50=3.02% 00:34:28.240 cpu : usr=0.39%, sys=1.86%, ctx=531, majf=0, minf=1 00:34:28.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.240 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.240 job2: (groupid=0, jobs=1): err= 0: pid=3674104: Wed Nov 20 07:48:46 2024 00:34:28.240 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:28.240 slat (nsec): min=26882, max=58097, avg=28021.28, stdev=3613.10 00:34:28.240 clat (usec): min=808, max=1423, avg=1145.61, stdev=86.60 00:34:28.240 lat (usec): min=836, max=1451, avg=1173.63, stdev=86.74 00:34:28.240 clat percentiles (usec): 00:34:28.240 | 1.00th=[ 914], 5.00th=[ 979], 10.00th=[ 1037], 20.00th=[ 1090], 
00:34:28.240 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1172], 00:34:28.240 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1237], 95.00th=[ 1270], 00:34:28.240 | 99.00th=[ 1303], 99.50th=[ 1352], 99.90th=[ 1418], 99.95th=[ 1418], 00:34:28.240 | 99.99th=[ 1418] 00:34:28.240 write: IOPS=594, BW=2378KiB/s (2435kB/s)(2380KiB/1001msec); 0 zone resets 00:34:28.240 slat (nsec): min=9370, max=63910, avg=31022.27, stdev=9897.89 00:34:28.240 clat (usec): min=187, max=3510, avg=625.64, stdev=173.11 00:34:28.240 lat (usec): min=223, max=3545, avg=656.66, stdev=176.13 00:34:28.240 clat percentiles (usec): 00:34:28.240 | 1.00th=[ 338], 5.00th=[ 408], 10.00th=[ 445], 20.00th=[ 498], 00:34:28.240 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 668], 00:34:28.240 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 799], 00:34:28.240 | 99.00th=[ 922], 99.50th=[ 979], 99.90th=[ 3523], 99.95th=[ 3523], 00:34:28.240 | 99.99th=[ 3523] 00:34:28.240 bw ( KiB/s): min= 4096, max= 4096, per=41.05%, avg=4096.00, stdev= 0.00, samples=1 00:34:28.240 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:28.240 lat (usec) : 250=0.09%, 500=10.93%, 750=34.42%, 1000=11.20% 00:34:28.240 lat (msec) : 2=43.27%, 4=0.09% 00:34:28.240 cpu : usr=1.90%, sys=4.80%, ctx=1108, majf=0, minf=1 00:34:28.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.240 issued rwts: total=512,595,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.240 job3: (groupid=0, jobs=1): err= 0: pid=3674105: Wed Nov 20 07:48:46 2024 00:34:28.240 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:28.240 slat (nsec): min=11405, max=62562, avg=27328.86, stdev=3572.96 00:34:28.240 clat (usec): min=592, max=1299, avg=1006.84, stdev=111.30 00:34:28.240 lat (usec): min=619, max=1326, avg=1034.16, stdev=111.32 00:34:28.240 clat percentiles (usec): 00:34:28.240 | 1.00th=[ 701], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 922], 00:34:28.240 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1037], 00:34:28.240 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1188], 00:34:28.240 | 99.00th=[ 1254], 99.50th=[ 1303], 99.90th=[ 1303], 99.95th=[ 1303], 00:34:28.240 | 99.99th=[ 1303] 00:34:28.240 write: IOPS=755, BW=3021KiB/s (3093kB/s)(3024KiB/1001msec); 0 zone resets 00:34:28.240 slat (nsec): min=9622, max=68340, avg=31462.67, stdev=8765.71 00:34:28.240 clat (usec): min=210, max=985, avg=578.47, stdev=126.10 00:34:28.240 lat (usec): min=221, max=1020, avg=609.93, stdev=128.96 00:34:28.240 clat percentiles (usec): 00:34:28.240 | 1.00th=[ 265], 5.00th=[ 367], 10.00th=[ 408], 20.00th=[ 474], 00:34:28.240 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 611], 00:34:28.240 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 783], 00:34:28.240 | 99.00th=[ 840], 99.50th=[ 906], 99.90th=[ 988], 99.95th=[ 988], 00:34:28.240 | 99.99th=[ 988] 00:34:28.240 bw ( KiB/s): min= 4096, max= 4096, per=41.05%, avg=4096.00, stdev= 0.00, samples=1 00:34:28.240 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:28.240 lat (usec) : 250=0.39%, 500=15.38%, 750=40.06%, 1000=22.08% 00:34:28.240 lat (msec) : 2=22.08% 00:34:28.240 cpu : usr=2.70%, sys=5.00%, ctx=1268, majf=0, minf=1 00:34:28.240 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.240 issued rwts: total=512,756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.240 00:34:28.240 Run status group 0 (all jobs): 00:34:28.240 READ: bw=6088KiB/s (6234kB/s), 70.5KiB/s-2046KiB/s (72.2kB/s-2095kB/s), io=6216KiB (6365kB), run=1001-1021msec 00:34:28.240 WRITE: bw=9978KiB/s (10.2MB/s), 2006KiB/s-3021KiB/s (2054kB/s-3093kB/s), io=9.95MiB (10.4MB), run=1001-1021msec 00:34:28.240 00:34:28.240 Disk stats (read/write): 00:34:28.240 nvme0n1: ios=515/512, merge=0/0, ticks=856/298, in_queue=1154, util=91.78% 00:34:28.240 nvme0n2: ios=63/512, merge=0/0, ticks=1433/324, in_queue=1757, util=97.35% 00:34:28.240 nvme0n3: ios=423/512, merge=0/0, ticks=436/264, in_queue=700, util=88.53% 00:34:28.240 nvme0n4: ios=501/512, merge=0/0, ticks=460/226, in_queue=686, util=89.47% 00:34:28.240 07:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:28.240 [global] 00:34:28.241 thread=1 00:34:28.241 invalidate=1 00:34:28.241 rw=randwrite 00:34:28.241 time_based=1 00:34:28.241 runtime=1 00:34:28.241 ioengine=libaio 00:34:28.241 direct=1 00:34:28.241 bs=4096 00:34:28.241 iodepth=1 00:34:28.241 norandommap=0 00:34:28.241 numjobs=1 00:34:28.241 00:34:28.241 verify_dump=1 00:34:28.241 verify_backlog=512 00:34:28.241 verify_state_save=0 00:34:28.241 do_verify=1 00:34:28.241 verify=crc32c-intel 00:34:28.241 [job0] 00:34:28.241 filename=/dev/nvme0n1 00:34:28.241 [job1] 00:34:28.241 filename=/dev/nvme0n2 00:34:28.241 [job2] 00:34:28.241 filename=/dev/nvme0n3 00:34:28.241 [job3] 00:34:28.241 filename=/dev/nvme0n4 00:34:28.241 Could not set queue depth (nvme0n1) 00:34:28.241 Could not set queue depth (nvme0n2) 00:34:28.241 Could not set queue depth (nvme0n3) 00:34:28.241 Could not set queue depth (nvme0n4) 00:34:28.506 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:28.506 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:28.506 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:28.506 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:28.506 fio-3.35 00:34:28.506 Starting 4 threads 00:34:29.910 00:34:29.910 job0: (groupid=0, jobs=1): err= 0: pid=3674627: Wed Nov 20 07:48:47 2024 00:34:29.910 read: IOPS=42, BW=170KiB/s (174kB/s)(172KiB/1010msec) 00:34:29.910 slat (nsec): min=7452, max=29826, avg=24179.88, stdev=5626.98 00:34:29.910 clat (usec): min=548, max=41989, avg=17554.97, stdev=19780.08 00:34:29.910 lat (usec): min=574, max=42015, avg=17579.15, stdev=19781.49 00:34:29.910 clat percentiles (usec): 00:34:29.910 | 1.00th=[ 545], 5.00th=[ 586], 10.00th=[ 611], 20.00th=[ 668], 00:34:29.910 | 30.00th=[ 725], 40.00th=[ 799], 50.00th=[ 963], 60.00th=[19006], 00:34:29.910 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:34:29.910 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:29.910 | 99.99th=[42206] 00:34:29.910 write: IOPS=506, BW=2028KiB/s 
(2076kB/s)(2048KiB/1010msec); 0 zone resets 00:34:29.910 slat (nsec): min=9568, max=50938, avg=29555.64, stdev=8735.77 00:34:29.910 clat (usec): min=172, max=753, avg=456.63, stdev=85.57 00:34:29.910 lat (usec): min=221, max=765, avg=486.19, stdev=87.61 00:34:29.910 clat percentiles (usec): 00:34:29.910 | 1.00th=[ 269], 5.00th=[ 318], 10.00th=[ 347], 20.00th=[ 379], 00:34:29.910 | 30.00th=[ 412], 40.00th=[ 445], 50.00th=[ 461], 60.00th=[ 478], 00:34:29.910 | 70.00th=[ 498], 80.00th=[ 529], 90.00th=[ 570], 95.00th=[ 586], 00:34:29.910 | 99.00th=[ 660], 99.50th=[ 693], 99.90th=[ 750], 99.95th=[ 750], 00:34:29.910 | 99.99th=[ 750] 00:34:29.910 bw ( KiB/s): min= 4096, max= 4096, per=33.02%, avg=4096.00, stdev= 0.00, samples=1 00:34:29.910 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:29.910 lat (usec) : 250=0.54%, 500=65.95%, 750=28.11%, 1000=1.62% 00:34:29.910 lat (msec) : 2=0.36%, 20=0.36%, 50=3.06% 00:34:29.910 cpu : usr=0.99%, sys=1.39%, ctx=556, majf=0, minf=1 00:34:29.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.910 issued rwts: total=43,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.910 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:29.911 job1: (groupid=0, jobs=1): err= 0: pid=3674628: Wed Nov 20 07:48:47 2024 00:34:29.911 read: IOPS=528, BW=2113KiB/s (2164kB/s)(2132KiB/1009msec) 00:34:29.911 slat (nsec): min=7186, max=59694, avg=24531.66, stdev=7963.28 00:34:29.911 clat (usec): min=473, max=41202, avg=894.93, stdev=1835.12 00:34:29.911 lat (usec): min=482, max=41232, avg=919.46, stdev=1835.46 00:34:29.911 clat percentiles (usec): 00:34:29.911 | 1.00th=[ 578], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 725], 00:34:29.911 | 30.00th=[ 766], 40.00th=[ 791], 50.00th=[ 807], 60.00th=[ 824], 00:34:29.911 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 889], 95.00th=[ 930], 00:34:29.911 | 99.00th=[ 979], 99.50th=[ 1090], 99.90th=[41157], 99.95th=[41157], 00:34:29.911 | 99.99th=[41157] 00:34:29.911 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:34:29.911 slat (nsec): min=9838, max=94321, avg=27854.24, stdev=11041.35 00:34:29.911 clat (usec): min=145, max=1021, avg=467.60, stdev=124.82 00:34:29.911 lat (usec): min=160, max=1054, avg=495.45, stdev=126.82 00:34:29.911 clat percentiles (usec): 00:34:29.911 | 1.00th=[ 269], 5.00th=[ 297], 10.00th=[ 326], 20.00th=[ 367], 00:34:29.911 | 30.00th=[ 408], 40.00th=[ 437], 50.00th=[ 461], 60.00th=[ 478], 00:34:29.911 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 603], 95.00th=[ 725], 00:34:29.911 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 988], 99.95th=[ 1020], 00:34:29.911 | 99.99th=[ 1020] 00:34:29.911 bw ( KiB/s): min= 4096, max= 4096, per=33.02%, avg=4096.00, stdev= 0.00, samples=2 00:34:29.911 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:34:29.911 lat (usec) : 250=0.26%, 500=45.86%, 750=25.95%, 1000=27.62% 00:34:29.911 lat (msec) : 2=0.19%, 20=0.06%, 50=0.06% 00:34:29.911 cpu : usr=2.48%, sys=3.87%, ctx=1560, majf=0, minf=1 00:34:29.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.911 issued rwts: total=533,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:34:29.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:29.911 job2: (groupid=0, jobs=1): err= 0: pid=3674629: Wed Nov 20 07:48:47 2024 00:34:29.911 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:29.911 slat (nsec): min=8652, max=46637, avg=27081.53, stdev=3707.18 00:34:29.911 clat (usec): min=585, max=1301, avg=1063.45, stdev=105.85 00:34:29.911 lat (usec): min=612, max=1328, avg=1090.53, stdev=106.48 00:34:29.911 clat percentiles (usec): 00:34:29.911 | 1.00th=[ 734], 5.00th=[ 848], 10.00th=[ 914], 20.00th=[ 996], 00:34:29.911 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:34:29.911 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:34:29.911 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1303], 99.95th=[ 1303], 00:34:29.911 | 99.99th=[ 1303] 00:34:29.911 write: IOPS=652, BW=2609KiB/s (2672kB/s)(2612KiB/1001msec); 0 zone resets 00:34:29.911 slat (nsec): min=10229, max=68470, avg=30925.74, stdev=9110.81 00:34:29.911 clat (usec): min=272, max=1077, avg=630.85, stdev=127.22 00:34:29.911 lat (usec): min=284, max=1111, avg=661.78, stdev=130.57 00:34:29.911 clat percentiles (usec): 00:34:29.911 | 1.00th=[ 326], 5.00th=[ 396], 10.00th=[ 453], 20.00th=[ 519], 00:34:29.911 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 676], 00:34:29.911 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 816], 00:34:29.911 | 99.00th=[ 865], 99.50th=[ 914], 99.90th=[ 1074], 99.95th=[ 1074], 00:34:29.911 | 99.99th=[ 1074] 00:34:29.911 bw ( KiB/s): min= 4104, max= 4104, per=33.08%, avg=4104.00, stdev= 0.00, samples=1 00:34:29.911 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:34:29.911 lat (usec) : 500=9.79%, 750=36.57%, 1000=18.71% 00:34:29.911 lat (msec) : 2=34.94% 00:34:29.911 cpu : usr=2.10%, sys=3.20%, ctx=1166, majf=0, minf=1 00:34:29.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.911 issued rwts: total=512,653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:29.911 job3: (groupid=0, jobs=1): err= 0: pid=3674630: Wed Nov 20 07:48:47 2024 00:34:29.911 read: IOPS=528, BW=2116KiB/s (2167kB/s)(2192KiB/1036msec) 00:34:29.911 slat (nsec): min=7191, max=53346, avg=23219.66, stdev=7916.71 00:34:29.911 clat (usec): min=467, max=41934, avg=961.80, stdev=2462.67 00:34:29.911 lat (usec): min=494, max=41958, avg=985.02, stdev=2462.96 00:34:29.911 clat percentiles (usec): 00:34:29.911 | 1.00th=[ 553], 5.00th=[ 660], 10.00th=[ 693], 20.00th=[ 742], 00:34:29.911 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[ 824], 60.00th=[ 840], 00:34:29.911 | 70.00th=[ 857], 80.00th=[ 881], 90.00th=[ 914], 95.00th=[ 938], 00:34:29.911 | 99.00th=[ 1004], 99.50th=[ 2802], 99.90th=[41681], 99.95th=[41681], 00:34:29.911 | 99.99th=[41681] 00:34:29.911 write: IOPS=988, BW=3954KiB/s (4049kB/s)(4096KiB/1036msec); 0 zone resets 00:34:29.911 slat (nsec): min=9664, max=51796, avg=27913.65, stdev=9553.93 00:34:29.911 clat (usec): min=165, max=689, avg=445.48, stdev=80.43 00:34:29.911 lat (usec): min=198, max=722, avg=473.39, stdev=84.37 00:34:29.911 clat percentiles (usec): 00:34:29.911 | 1.00th=[ 262], 5.00th=[ 306], 10.00th=[ 334], 20.00th=[ 371], 00:34:29.911 | 30.00th=[ 412], 40.00th=[ 437], 50.00th=[ 453], 60.00th=[ 469], 00:34:29.911 | 70.00th=[ 486], 
80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 578], 00:34:29.911 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 685], 99.95th=[ 693], 00:34:29.911 | 99.99th=[ 693] 00:34:29.911 bw ( KiB/s): min= 4096, max= 4096, per=33.02%, avg=4096.00, stdev= 0.00, samples=2 00:34:29.911 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:34:29.911 lat (usec) : 250=0.38%, 500=50.45%, 750=21.88%, 1000=26.84% 00:34:29.911 lat (msec) : 2=0.25%, 4=0.06%, 50=0.13% 00:34:29.911 cpu : usr=1.93%, sys=4.25%, ctx=1572, majf=0, minf=1 00:34:29.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.911 issued rwts: total=548,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:29.911 00:34:29.911 Run status group 0 (all jobs): 00:34:29.911 READ: bw=6317KiB/s (6468kB/s), 170KiB/s-2116KiB/s (174kB/s-2167kB/s), io=6544KiB (6701kB), run=1001-1036msec 00:34:29.911 WRITE: bw=12.1MiB/s (12.7MB/s), 2028KiB/s-4059KiB/s (2076kB/s-4157kB/s), io=12.6MiB (13.2MB), run=1001-1036msec 00:34:29.911 00:34:29.911 Disk stats (read/write): 00:34:29.911 nvme0n1: ios=69/512, merge=0/0, ticks=645/223, in_queue=868, util=88.78% 00:34:29.911 nvme0n2: ios=557/821, merge=0/0, ticks=817/381, in_queue=1198, util=95.63% 00:34:29.911 nvme0n3: ios=497/512, merge=0/0, ticks=1442/321, in_queue=1763, util=99.37% 00:34:29.911 nvme0n4: ios=569/854, merge=0/0, ticks=500/381, in_queue=881, util=94.91% 00:34:29.911 07:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:29.911 [global] 00:34:29.911 thread=1 00:34:29.911 invalidate=1 00:34:29.911 rw=write 00:34:29.911 time_based=1 00:34:29.911 runtime=1 00:34:29.911 ioengine=libaio 00:34:29.911 direct=1 00:34:29.911 bs=4096 00:34:29.911 iodepth=128 00:34:29.911 norandommap=0 00:34:29.911 numjobs=1 00:34:29.911 00:34:29.911 verify_dump=1 00:34:29.911 verify_backlog=512 00:34:29.911 verify_state_save=0 00:34:29.911 do_verify=1 00:34:29.911 verify=crc32c-intel 00:34:29.911 [job0] 00:34:29.911 filename=/dev/nvme0n1 00:34:29.911 [job1] 00:34:29.911 filename=/dev/nvme0n2 00:34:29.911 [job2] 00:34:29.911 filename=/dev/nvme0n3 00:34:29.911 [job3] 00:34:29.911 filename=/dev/nvme0n4 00:34:29.911 Could not set queue depth (nvme0n1) 00:34:29.911 Could not set queue depth (nvme0n2) 00:34:29.911 Could not set queue depth (nvme0n3) 00:34:29.911 Could not set queue depth (nvme0n4) 00:34:30.177 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.177 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.177 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.177 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.177 fio-3.35 00:34:30.177 Starting 4 threads 00:34:31.581 00:34:31.581 job0: (groupid=0, jobs=1): err= 0: pid=3675146: Wed Nov 20 07:48:49 2024 00:34:31.581 read: IOPS=6643, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:34:31.581 slat (nsec): min=892, max=15851k, avg=72560.65, stdev=440872.33 00:34:31.581 clat (usec): min=1443, max=46690, avg=9288.23, 
stdev=4172.87 00:34:31.581 lat (usec): min=1445, max=46701, avg=9360.80, stdev=4207.48 00:34:31.581 clat percentiles (usec): 00:34:31.581 | 1.00th=[ 6652], 5.00th=[ 7308], 10.00th=[ 7439], 20.00th=[ 7701], 00:34:31.581 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:34:31.581 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[11731], 00:34:31.581 | 99.00th=[33817], 99.50th=[39060], 99.90th=[44827], 99.95th=[44827], 00:34:31.581 | 99.99th=[46924] 00:34:31.581 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:34:31.581 slat (nsec): min=1530, max=5085.3k, avg=69563.61, stdev=304571.06 00:34:31.581 clat (usec): min=1447, max=37234, avg=9077.68, stdev=3327.89 00:34:31.581 lat (usec): min=1450, max=37246, avg=9147.24, stdev=3354.70 00:34:31.581 clat percentiles (usec): 00:34:31.581 | 1.00th=[ 5407], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7701], 00:34:31.581 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8455], 00:34:31.581 | 70.00th=[ 8717], 80.00th=[ 9241], 90.00th=[12387], 95.00th=[14877], 00:34:31.581 | 99.00th=[28181], 99.50th=[33817], 99.90th=[36439], 99.95th=[36963], 00:34:31.581 | 99.99th=[37487] 00:34:31.581 bw ( KiB/s): min=25600, max=30736, per=25.02%, avg=28168.00, stdev=3631.70, samples=2 00:34:31.581 iops : min= 6400, max= 7684, avg=7042.00, stdev=907.93, samples=2 00:34:31.581 lat (msec) : 2=0.07%, 4=0.30%, 10=85.84%, 20=11.85%, 50=1.93% 00:34:31.581 cpu : usr=2.70%, sys=3.80%, ctx=949, majf=0, minf=1 00:34:31.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:31.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.581 issued rwts: total=6657,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.581 job1: (groupid=0, jobs=1): err= 0: pid=3675147: Wed Nov 20 07:48:49 2024 00:34:31.581 read: IOPS=7090, BW=27.7MiB/s (29.0MB/s)(27.9MiB/1009msec) 00:34:31.581 slat (nsec): min=904, max=12073k, avg=73829.26, stdev=538308.12 00:34:31.581 clat (usec): min=1467, max=33119, avg=9441.94, stdev=4304.57 00:34:31.581 lat (usec): min=4441, max=33144, avg=9515.77, stdev=4349.09 00:34:31.581 clat percentiles (usec): 00:34:31.581 | 1.00th=[ 5669], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 7570], 00:34:31.581 | 30.00th=[ 7898], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8586], 00:34:31.581 | 70.00th=[ 8717], 80.00th=[ 9241], 90.00th=[11076], 95.00th=[20579], 00:34:31.581 | 99.00th=[29754], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:34:31.581 | 99.99th=[33162] 00:34:31.581 write: IOPS=7104, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1009msec); 0 zone resets 00:34:31.581 slat (nsec): min=1551, max=17141k, avg=62113.53, stdev=408406.42 00:34:31.581 clat (usec): min=3035, max=27947, avg=8395.41, stdev=2475.89 00:34:31.581 lat (usec): min=3037, max=27980, avg=8457.52, stdev=2496.77 00:34:31.581 clat percentiles (usec): 00:34:31.581 | 1.00th=[ 4817], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7504], 00:34:31.581 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8094], 60.00th=[ 8225], 00:34:31.581 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[10552], 00:34:31.581 | 99.00th=[23987], 99.50th=[24249], 99.90th=[24511], 99.95th=[24511], 00:34:31.581 | 99.99th=[27919] 00:34:31.581 bw ( KiB/s): min=28672, max=28672, per=25.47%, avg=28672.00, stdev= 0.00, samples=2 00:34:31.581 iops : min= 7168, max= 
7168, avg=7168.00, stdev= 0.00, samples=2 00:34:31.581 lat (msec) : 2=0.01%, 4=0.08%, 10=89.90%, 20=5.79%, 50=4.22% 00:34:31.581 cpu : usr=4.76%, sys=6.25%, ctx=600, majf=0, minf=2 00:34:31.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:31.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.581 issued rwts: total=7154,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.581 job2: (groupid=0, jobs=1): err= 0: pid=3675148: Wed Nov 20 07:48:49 2024 00:34:31.581 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:34:31.581 slat (nsec): min=945, max=4924.1k, avg=70489.64, stdev=466050.77 00:34:31.581 clat (usec): min=5274, max=16704, avg=9356.46, stdev=1325.73 00:34:31.581 lat (usec): min=5280, max=17745, avg=9426.95, stdev=1378.14 00:34:31.581 clat percentiles (usec): 00:34:31.581 | 1.00th=[ 5866], 5.00th=[ 7177], 10.00th=[ 7898], 20.00th=[ 8291], 00:34:31.581 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:34:31.581 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10683], 95.00th=[11469], 00:34:31.581 | 99.00th=[13566], 99.50th=[14484], 99.90th=[16712], 99.95th=[16712], 00:34:31.581 | 99.99th=[16712] 00:34:31.581 write: IOPS=6866, BW=26.8MiB/s (28.1MB/s)(26.9MiB/1004msec); 0 zone resets 00:34:31.581 slat (nsec): min=1573, max=7863.0k, avg=72390.93, stdev=464095.93 00:34:31.581 clat (usec): min=718, max=17921, avg=9398.87, stdev=1563.80 00:34:31.581 lat (usec): min=4259, max=17952, avg=9471.26, stdev=1615.64 00:34:31.581 clat percentiles (usec): 00:34:31.581 | 1.00th=[ 4817], 5.00th=[ 7570], 10.00th=[ 7898], 20.00th=[ 8225], 00:34:31.581 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:34:31.581 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[11076], 95.00th=[12911], 00:34:31.581 | 99.00th=[14615], 99.50th=[15008], 99.90th=[15926], 99.95th=[15926], 00:34:31.581 | 99.99th=[17957] 00:34:31.581 bw ( KiB/s): min=26032, max=28096, per=24.04%, avg=27064.00, stdev=1459.47, samples=2 00:34:31.581 iops : min= 6508, max= 7024, avg=6766.00, stdev=364.87, samples=2 00:34:31.581 lat (usec) : 750=0.01% 00:34:31.581 lat (msec) : 10=80.61%, 20=19.38% 00:34:31.581 cpu : usr=4.09%, sys=7.48%, ctx=487, majf=0, minf=1 00:34:31.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:31.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.581 issued rwts: total=6656,6894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.581 job3: (groupid=0, jobs=1): err= 0: pid=3675149: Wed Nov 20 07:48:49 2024 00:34:31.581 read: IOPS=6604, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1009msec) 00:34:31.581 slat (nsec): min=1004, max=9525.1k, avg=68651.15, stdev=570677.07 00:34:31.581 clat (usec): min=1986, max=18733, avg=9591.21, stdev=2487.78 00:34:31.581 lat (usec): min=1993, max=18744, avg=9659.86, stdev=2515.23 00:34:31.581 clat percentiles (usec): 00:34:31.581 | 1.00th=[ 4047], 5.00th=[ 5473], 10.00th=[ 7308], 20.00th=[ 8029], 00:34:31.581 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9634], 00:34:31.581 | 70.00th=[10552], 80.00th=[11469], 90.00th=[13173], 95.00th=[14484], 00:34:31.581 | 99.00th=[16188], 99.50th=[16909], 99.90th=[18482], 
99.95th=[18482], 00:34:31.581 | 99.99th=[18744] 00:34:31.581 write: IOPS=7104, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1009msec); 0 zone resets 00:34:31.581 slat (nsec): min=1672, max=8235.2k, avg=60117.96, stdev=442087.75 00:34:31.581 clat (usec): min=590, max=19800, avg=8933.58, stdev=3061.82 00:34:31.581 lat (usec): min=601, max=19802, avg=8993.69, stdev=3081.84 00:34:31.581 clat percentiles (usec): 00:34:31.581 | 1.00th=[ 2040], 5.00th=[ 4752], 10.00th=[ 5211], 20.00th=[ 5866], 00:34:31.581 | 30.00th=[ 7439], 40.00th=[ 8356], 50.00th=[ 9110], 60.00th=[ 9241], 00:34:31.581 | 70.00th=[ 9765], 80.00th=[11863], 90.00th=[13304], 95.00th=[14353], 00:34:31.581 | 99.00th=[16057], 99.50th=[16909], 99.90th=[17957], 99.95th=[17957], 00:34:31.581 | 99.99th=[19792] 00:34:31.581 bw ( KiB/s): min=27712, max=28672, per=25.04%, avg=28192.00, stdev=678.82, samples=2 00:34:31.581 iops : min= 6928, max= 7168, avg=7048.00, stdev=169.71, samples=2 00:34:31.581 lat (usec) : 750=0.02% 00:34:31.581 lat (msec) : 2=0.46%, 4=1.68%, 10=67.13%, 20=30.71% 00:34:31.581 cpu : usr=5.65%, sys=6.85%, ctx=496, majf=0, minf=2 00:34:31.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:31.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.581 issued rwts: total=6664,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.581 00:34:31.581 Run status group 0 (all jobs): 00:34:31.581 READ: bw=105MiB/s (110MB/s), 25.8MiB/s-27.7MiB/s (27.1MB/s-29.0MB/s), io=106MiB (111MB), run=1002-1009msec 00:34:31.581 WRITE: bw=110MiB/s (115MB/s), 26.8MiB/s-27.9MiB/s (28.1MB/s-29.3MB/s), io=111MiB (116MB), run=1002-1009msec 00:34:31.581 00:34:31.581 Disk stats (read/write): 00:34:31.581 nvme0n1: ios=5682/6143, merge=0/0, ticks=17100/17605, in_queue=34705, util=87.78% 00:34:31.581 nvme0n2: ios=6176/6315, merge=0/0, ticks=26003/24141, in_queue=50144, util=87.28% 00:34:31.581 nvme0n3: ios=5462/5632, merge=0/0, ticks=25139/24956, in_queue=50095, util=88.56% 00:34:31.581 nvme0n4: ios=5764/6144, merge=0/0, ticks=51242/47760, in_queue=99002, util=100.00% 00:34:31.581 07:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:31.581 [global] 00:34:31.581 thread=1 00:34:31.582 invalidate=1 00:34:31.582 rw=randwrite 00:34:31.582 time_based=1 00:34:31.582 runtime=1 00:34:31.582 ioengine=libaio 00:34:31.582 direct=1 00:34:31.582 bs=4096 00:34:31.582 iodepth=128 00:34:31.582 norandommap=0 00:34:31.582 numjobs=1 00:34:31.582 00:34:31.582 verify_dump=1 00:34:31.582 verify_backlog=512 00:34:31.582 verify_state_save=0 00:34:31.582 do_verify=1 00:34:31.582 verify=crc32c-intel 00:34:31.582 [job0] 00:34:31.582 filename=/dev/nvme0n1 00:34:31.582 [job1] 00:34:31.582 filename=/dev/nvme0n2 00:34:31.582 [job2] 00:34:31.582 filename=/dev/nvme0n3 00:34:31.582 [job3] 00:34:31.582 filename=/dev/nvme0n4 00:34:31.582 Could not set queue depth (nvme0n1) 00:34:31.582 Could not set queue depth (nvme0n2) 00:34:31.582 Could not set queue depth (nvme0n3) 00:34:31.582 Could not set queue depth (nvme0n4) 00:34:31.840 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:31.840 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:34:31.840 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:31.840 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:31.840 fio-3.35 00:34:31.840 Starting 4 threads 00:34:33.224 00:34:33.224 job0: (groupid=0, jobs=1): err= 0: pid=3675618: Wed Nov 20 07:48:51 2024 00:34:33.224 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:34:33.224 slat (nsec): min=976, max=15283k, avg=91025.27, stdev=732698.21 00:34:33.224 clat (usec): min=3320, max=63225, avg=11370.75, stdev=6724.84 00:34:33.224 lat (usec): min=3329, max=63232, avg=11461.77, stdev=6794.27 00:34:33.224 clat percentiles (usec): 00:34:33.224 | 1.00th=[ 4359], 5.00th=[ 5538], 10.00th=[ 6128], 20.00th=[ 7046], 00:34:33.224 | 30.00th=[ 7308], 40.00th=[ 7963], 50.00th=[ 8979], 60.00th=[11731], 00:34:33.224 | 70.00th=[13173], 80.00th=[14746], 90.00th=[18482], 95.00th=[20055], 00:34:33.224 | 99.00th=[44303], 99.50th=[58983], 99.90th=[61604], 99.95th=[63177], 00:34:33.224 | 99.99th=[63177] 00:34:33.224 write: IOPS=5249, BW=20.5MiB/s (21.5MB/s)(20.7MiB/1010msec); 0 zone resets 00:34:33.224 slat (nsec): min=1519, max=9066.9k, avg=94912.27, stdev=594669.54 00:34:33.224 clat (usec): min=1208, max=63129, avg=13179.42, stdev=14084.58 00:34:33.224 lat (usec): min=1218, max=63139, avg=13274.33, stdev=14174.17 00:34:33.224 clat percentiles (usec): 00:34:33.224 | 1.00th=[ 3589], 5.00th=[ 4146], 10.00th=[ 4686], 20.00th=[ 5735], 00:34:33.224 | 30.00th=[ 6063], 40.00th=[ 6390], 50.00th=[ 8094], 60.00th=[ 9110], 00:34:33.224 | 70.00th=[11076], 80.00th=[12649], 90.00th=[37487], 95.00th=[54264], 00:34:33.224 | 99.00th=[58459], 99.50th=[60556], 99.90th=[62653], 99.95th=[62653], 00:34:33.224 | 99.99th=[63177] 00:34:33.225 bw ( KiB/s): min=16416, max=25008, per=23.78%, avg=20712.00, stdev=6075.46, samples=2 00:34:33.225 iops : min= 4104, max= 6252, avg=5178.00, stdev=1518.87, samples=2 00:34:33.225 lat (msec) : 2=0.09%, 4=2.45%, 10=57.23%, 20=30.46%, 50=5.94% 00:34:33.225 lat (msec) : 100=3.84% 00:34:33.225 cpu : usr=3.67%, sys=5.75%, ctx=288, majf=0, minf=2 00:34:33.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:33.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:33.225 issued rwts: total=5120,5302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:33.225 job1: (groupid=0, jobs=1): err= 0: pid=3675634: Wed Nov 20 07:48:51 2024 00:34:33.225 read: IOPS=5938, BW=23.2MiB/s (24.3MB/s)(23.3MiB/1006msec) 00:34:33.225 slat (nsec): min=988, max=10987k, avg=74365.89, stdev=534423.78 00:34:33.225 clat (usec): min=2595, max=31206, avg=9504.35, stdev=3810.37 00:34:33.225 lat (usec): min=2602, max=31210, avg=9578.72, stdev=3854.69 00:34:33.225 clat percentiles (usec): 00:34:33.225 | 1.00th=[ 5080], 5.00th=[ 5866], 10.00th=[ 6325], 20.00th=[ 6456], 00:34:33.225 | 30.00th=[ 7373], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9110], 00:34:33.225 | 70.00th=[10028], 80.00th=[11469], 90.00th=[13435], 95.00th=[16188], 00:34:33.225 | 99.00th=[26608], 99.50th=[27395], 99.90th=[31065], 99.95th=[31327], 00:34:33.225 | 99.99th=[31327] 00:34:33.225 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:34:33.225 slat (nsec): min=1585, max=9166.2k, avg=82787.18, stdev=463838.31 
00:34:33.225 clat (usec): min=1251, max=31191, avg=11525.73, stdev=6825.03 00:34:33.225 lat (usec): min=1263, max=31193, avg=11608.52, stdev=6865.83 00:34:33.225 clat percentiles (usec): 00:34:33.225 | 1.00th=[ 3458], 5.00th=[ 4359], 10.00th=[ 4948], 20.00th=[ 6128], 00:34:33.225 | 30.00th=[ 6259], 40.00th=[ 6915], 50.00th=[ 8586], 60.00th=[11207], 00:34:33.225 | 70.00th=[14484], 80.00th=[19006], 90.00th=[22152], 95.00th=[24511], 00:34:33.225 | 99.00th=[27919], 99.50th=[28181], 99.90th=[30802], 99.95th=[31065], 00:34:33.225 | 99.99th=[31065] 00:34:33.225 bw ( KiB/s): min=21392, max=27760, per=28.22%, avg=24576.00, stdev=4502.86, samples=2 00:34:33.225 iops : min= 5348, max= 6940, avg=6144.00, stdev=1125.71, samples=2 00:34:33.225 lat (msec) : 2=0.12%, 4=1.27%, 10=60.93%, 20=27.57%, 50=10.11% 00:34:33.225 cpu : usr=5.07%, sys=6.37%, ctx=477, majf=0, minf=2 00:34:33.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:33.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:33.225 issued rwts: total=5974,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:33.225 job2: (groupid=0, jobs=1): err= 0: pid=3675651: Wed Nov 20 07:48:51 2024 00:34:33.225 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:34:33.225 slat (nsec): min=940, max=13067k, avg=86312.11, stdev=642705.90 00:34:33.225 clat (usec): min=3562, max=52126, avg=10867.06, stdev=5015.52 00:34:33.225 lat (usec): min=3570, max=52128, avg=10953.37, stdev=5072.13 00:34:33.225 clat percentiles (usec): 00:34:33.225 | 1.00th=[ 3884], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 7046], 00:34:33.225 | 30.00th=[ 7963], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[10814], 00:34:33.225 | 70.00th=[12125], 80.00th=[13960], 90.00th=[16057], 95.00th=[19530], 00:34:33.225 | 99.00th=[27395], 99.50th=[39584], 99.90th=[49546], 99.95th=[52167], 00:34:33.225 | 99.99th=[52167] 00:34:33.225 write: IOPS=5891, BW=23.0MiB/s (24.1MB/s)(23.2MiB/1006msec); 0 zone resets 00:34:33.225 slat (nsec): min=1528, max=11199k, avg=81178.29, stdev=542483.49 00:34:33.225 clat (usec): min=2157, max=52716, avg=11153.42, stdev=9716.55 00:34:33.225 lat (usec): min=2164, max=52726, avg=11234.60, stdev=9780.34 00:34:33.225 clat percentiles (usec): 00:34:33.225 | 1.00th=[ 3261], 5.00th=[ 4621], 10.00th=[ 5014], 20.00th=[ 5997], 00:34:33.225 | 30.00th=[ 6652], 40.00th=[ 7177], 50.00th=[ 8225], 60.00th=[ 8848], 00:34:33.225 | 70.00th=[10028], 80.00th=[12256], 90.00th=[20317], 95.00th=[40109], 00:34:33.225 | 99.00th=[48497], 99.50th=[50070], 99.90th=[52691], 99.95th=[52691], 00:34:33.225 | 99.99th=[52691] 00:34:33.225 bw ( KiB/s): min=17664, max=28785, per=26.66%, avg=23224.50, stdev=7863.73, samples=2 00:34:33.225 iops : min= 4416, max= 7196, avg=5806.00, stdev=1965.76, samples=2 00:34:33.225 lat (msec) : 4=1.57%, 10=60.65%, 20=30.49%, 50=6.94%, 100=0.36% 00:34:33.225 cpu : usr=4.58%, sys=6.37%, ctx=387, majf=0, minf=1 00:34:33.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:33.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:33.225 issued rwts: total=5632,5927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:33.225 job3: (groupid=0, jobs=1): err= 0: 
pid=3675657: Wed Nov 20 07:48:51 2024 00:34:33.225 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:34:33.225 slat (nsec): min=979, max=13218k, avg=102927.42, stdev=749252.08 00:34:33.225 clat (usec): min=3751, max=36433, avg=13151.92, stdev=5975.83 00:34:33.225 lat (usec): min=3771, max=36467, avg=13254.85, stdev=6039.46 00:34:33.225 clat percentiles (usec): 00:34:33.225 | 1.00th=[ 5932], 5.00th=[ 7177], 10.00th=[ 7898], 20.00th=[ 8586], 00:34:33.225 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10945], 00:34:33.225 | 70.00th=[16581], 80.00th=[19006], 90.00th=[22414], 95.00th=[25035], 00:34:33.225 | 99.00th=[29492], 99.50th=[30278], 99.90th=[33162], 99.95th=[33162], 00:34:33.225 | 99.99th=[36439] 00:34:33.225 write: IOPS=4592, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:34:33.225 slat (nsec): min=1670, max=10646k, avg=107480.04, stdev=613603.66 00:34:33.225 clat (usec): min=872, max=51493, avg=14386.29, stdev=11156.77 00:34:33.225 lat (usec): min=1087, max=51503, avg=14493.77, stdev=11238.21 00:34:33.225 clat percentiles (usec): 00:34:33.225 | 1.00th=[ 5211], 5.00th=[ 7046], 10.00th=[ 7701], 20.00th=[ 7898], 00:34:33.225 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[11863], 00:34:33.225 | 70.00th=[13042], 80.00th=[17695], 90.00th=[30802], 95.00th=[46400], 00:34:33.225 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:34:33.225 | 99.99th=[51643] 00:34:33.225 bw ( KiB/s): min=16464, max=20432, per=21.18%, avg=18448.00, stdev=2805.80, samples=2 00:34:33.225 iops : min= 4116, max= 5108, avg=4612.00, stdev=701.45, samples=2 00:34:33.225 lat (usec) : 1000=0.01% 00:34:33.225 lat (msec) : 2=0.03%, 4=0.17%, 10=51.87%, 20=32.37%, 50=14.28% 00:34:33.225 lat (msec) : 100=1.26% 00:34:33.225 cpu : usr=4.08%, sys=5.27%, ctx=299, majf=0, minf=1 00:34:33.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:33.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:33.225 issued rwts: total=4608,4620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:33.225 00:34:33.225 Run status group 0 (all jobs): 00:34:33.225 READ: bw=82.5MiB/s (86.5MB/s), 17.9MiB/s-23.2MiB/s (18.8MB/s-24.3MB/s), io=83.3MiB (87.4MB), run=1006-1010msec 00:34:33.225 WRITE: bw=85.1MiB/s (89.2MB/s), 17.9MiB/s-23.9MiB/s (18.8MB/s-25.0MB/s), io=85.9MiB (90.1MB), run=1006-1010msec 00:34:33.225 00:34:33.225 Disk stats (read/write): 00:34:33.225 nvme0n1: ios=4146/4577, merge=0/0, ticks=43486/58547, in_queue=102033, util=88.38% 00:34:33.225 nvme0n2: ios=5159/5167, merge=0/0, ticks=45220/56567, in_queue=101787, util=87.87% 00:34:33.225 nvme0n3: ios=4608/5055, merge=0/0, ticks=41648/51409, in_queue=93057, util=88.63% 00:34:33.225 nvme0n4: ios=3584/3983, merge=0/0, ticks=22371/28693, in_queue=51064, util=89.57% 00:34:33.225 07:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:33.225 07:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3675714 00:34:33.225 07:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:33.225 07:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:33.225 
[global] 00:34:33.225 thread=1 00:34:33.225 invalidate=1 00:34:33.225 rw=read 00:34:33.225 time_based=1 00:34:33.225 runtime=10 00:34:33.225 ioengine=libaio 00:34:33.226 direct=1 00:34:33.226 bs=4096 00:34:33.226 iodepth=1 00:34:33.226 norandommap=1 00:34:33.226 numjobs=1 00:34:33.226 00:34:33.226 [job0] 00:34:33.226 filename=/dev/nvme0n1 00:34:33.226 [job1] 00:34:33.226 filename=/dev/nvme0n2 00:34:33.226 [job2] 00:34:33.226 filename=/dev/nvme0n3 00:34:33.226 [job3] 00:34:33.226 filename=/dev/nvme0n4 00:34:33.226 Could not set queue depth (nvme0n1) 00:34:33.226 Could not set queue depth (nvme0n2) 00:34:33.226 Could not set queue depth (nvme0n3) 00:34:33.226 Could not set queue depth (nvme0n4) 00:34:33.486 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:33.486 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:33.486 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:33.486 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:33.486 fio-3.35 00:34:33.486 Starting 4 threads 00:34:36.027 07:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:36.027 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=5160960, buflen=4096 00:34:36.027 fio: pid=3676132, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:36.288 07:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:36.288 07:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:36.288 07:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:36.288 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=737280, buflen=4096 00:34:36.288 fio: pid=3676127, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:36.548 07:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:36.548 07:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:36.548 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=311296, buflen=4096 00:34:36.548 fio: pid=3676100, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:36.548 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=14663680, buflen=4096 00:34:36.548 fio: pid=3676112, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:36.811 07:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:36.811 07:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc2 00:34:36.811 00:34:36.811 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3676100: Wed Nov 20 07:48:54 2024 00:34:36.811 read: IOPS=25, BW=102KiB/s (104kB/s)(304KiB/2982msec) 00:34:36.811 slat (usec): min=8, max=14567, avg=260.05, stdev=1700.26 00:34:36.811 clat (usec): min=623, max=42114, avg=38627.66, stdev=11140.31 00:34:36.811 lat (usec): min=662, max=55970, avg=38890.78, stdev=11330.49 00:34:36.811 clat percentiles (usec): 00:34:36.811 | 1.00th=[ 627], 5.00th=[ 816], 10.00th=[41157], 20.00th=[41681], 00:34:36.811 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:36.811 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:36.811 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:36.811 | 99.99th=[42206] 00:34:36.811 bw ( KiB/s): min= 96, max= 112, per=1.57%, avg=102.40, stdev= 6.69, samples=5 00:34:36.811 iops : min= 24, max= 28, avg=25.60, stdev= 1.67, samples=5 00:34:36.811 lat (usec) : 750=1.30%, 1000=5.19% 00:34:36.811 lat (msec) : 2=1.30%, 50=90.91% 00:34:36.811 cpu : usr=0.00%, sys=0.10%, ctx=79, majf=0, minf=1 00:34:36.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.811 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.811 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:36.811 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3676112: Wed Nov 20 07:48:54 2024 00:34:36.811 read: IOPS=1141, BW=4566KiB/s (4676kB/s)(14.0MiB/3136msec) 00:34:36.811 slat (usec): min=6, max=31521, avg=41.44, stdev=627.92 00:34:36.811 clat (usec): min=274, max=41988, avg=824.48, stdev=1361.49 00:34:36.811 lat (usec): min=281, max=54961, avg=865.93, stdev=1599.34 00:34:36.811 clat percentiles (usec): 00:34:36.811 | 1.00th=[ 562], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 725], 00:34:36.811 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 783], 60.00th=[ 799], 00:34:36.811 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 848], 95.00th=[ 889], 00:34:36.811 | 99.00th=[ 1020], 99.50th=[ 1057], 99.90th=[41157], 99.95th=[41681], 00:34:36.811 | 99.99th=[42206] 00:34:36.811 bw ( KiB/s): min= 2891, max= 5056, per=71.55%, avg=4651.17, stdev=863.10, samples=6 00:34:36.811 iops : min= 722, max= 1264, avg=1162.67, stdev=216.08, samples=6 00:34:36.811 lat (usec) : 500=0.50%, 750=24.91%, 1000=73.36% 00:34:36.811 lat (msec) : 2=1.09%, 50=0.11% 00:34:36.811 cpu : usr=1.28%, sys=2.90%, ctx=3585, majf=0, minf=2 00:34:36.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.811 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.811 issued rwts: total=3581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:36.811 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3676127: Wed Nov 20 07:48:54 2024 00:34:36.811 read: IOPS=63, BW=254KiB/s (260kB/s)(720KiB/2831msec) 00:34:36.811 slat (usec): min=6, max=10517, avg=80.97, stdev=780.05 00:34:36.811 clat (usec): min=592, max=42104, avg=15514.68, stdev=19681.07 00:34:36.811 lat 
(usec): min=599, max=52012, avg=15595.96, stdev=19775.48 00:34:36.811 clat percentiles (usec): 00:34:36.811 | 1.00th=[ 660], 5.00th=[ 758], 10.00th=[ 799], 20.00th=[ 881], 00:34:36.811 | 30.00th=[ 930], 40.00th=[ 971], 50.00th=[ 1004], 60.00th=[ 1057], 00:34:36.811 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:36.811 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:36.811 | 99.99th=[42206] 00:34:36.811 bw ( KiB/s): min= 88, max= 688, per=4.23%, avg=275.20, stdev=268.01, samples=5 00:34:36.811 iops : min= 22, max= 172, avg=68.80, stdev=67.00, samples=5 00:34:36.811 lat (usec) : 750=3.87%, 1000=43.65% 00:34:36.811 lat (msec) : 2=16.57%, 50=35.36% 00:34:36.811 cpu : usr=0.00%, sys=0.28%, ctx=182, majf=0, minf=2 00:34:36.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.811 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.811 issued rwts: total=181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:36.811 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3676132: Wed Nov 20 07:48:54 2024 00:34:36.811 read: IOPS=482, BW=1930KiB/s (1977kB/s)(5040KiB/2611msec) 00:34:36.811 slat (nsec): min=6841, max=61659, avg=27082.76, stdev=4657.99 00:34:36.811 clat (usec): min=339, max=42042, avg=2017.00, stdev=6539.11 00:34:36.811 lat (usec): min=367, max=42069, avg=2044.09, stdev=6539.02 00:34:36.811 clat percentiles (usec): 00:34:36.811 | 1.00th=[ 635], 5.00th=[ 766], 10.00th=[ 816], 20.00th=[ 873], 00:34:36.811 | 30.00th=[ 914], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:34:36.811 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1123], 00:34:36.811 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:36.811 | 99.99th=[42206] 00:34:36.811 bw ( KiB/s): min= 96, max= 4120, per=29.02%, avg=1886.40, stdev=2048.96, samples=5 00:34:36.811 iops : min= 24, max= 1030, avg=471.60, stdev=512.24, samples=5 00:34:36.811 lat (usec) : 500=0.24%, 750=3.97%, 1000=67.49% 00:34:36.811 lat (msec) : 2=25.61%, 50=2.62% 00:34:36.811 cpu : usr=1.34%, sys=1.42%, ctx=1264, majf=0, minf=2 00:34:36.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.811 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.811 issued rwts: total=1261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:36.811 00:34:36.811 Run status group 0 (all jobs): 00:34:36.811 READ: bw=6500KiB/s (6656kB/s), 102KiB/s-4566KiB/s (104kB/s-4676kB/s), io=19.9MiB (20.9MB), run=2611-3136msec 00:34:36.811 00:34:36.811 Disk stats (read/write): 00:34:36.811 nvme0n1: ios=72/0, merge=0/0, ticks=2810/0, in_queue=2810, util=94.36% 00:34:36.811 nvme0n2: ios=3552/0, merge=0/0, ticks=2860/0, in_queue=2860, util=93.81% 00:34:36.811 nvme0n3: ios=175/0, merge=0/0, ticks=2586/0, in_queue=2586, util=96.04% 00:34:36.811 nvme0n4: ios=1299/0, merge=0/0, ticks=3380/0, in_queue=3380, util=98.81% 00:34:36.811 07:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:36.811 07:48:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:37.073 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:37.073 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:37.334 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:37.334 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:37.334 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:37.334 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:37.595 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:37.595 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3675714 00:34:37.595 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:37.595 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:37.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:37.595 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:37.595 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:34:37.595 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:37.595 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:37.855 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:37.855 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:37.855 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:34:37.855 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:37.855 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:37.855 nvmf hotplug test: fio failed as expected 00:34:37.855 07:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:37.855 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:37.855 
07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:37.855 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:37.855 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:37.855 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:37.855 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:37.855 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:37.855 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:37.855 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:37.855 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:37.855 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:37.855 rmmod nvme_tcp 00:34:37.855 rmmod nvme_fabrics 00:34:38.116 rmmod nvme_keyring 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3672521 ']' 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3672521 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3672521 ']' 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3672521 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3672521 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3672521' 00:34:38.116 killing process with pid 3672521 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3672521 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3672521 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:38.116 07:48:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:38.116 07:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:40.662 00:34:40.662 real 0m28.093s 00:34:40.662 user 2m15.732s 00:34:40.662 sys 0m12.057s 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:40.662 ************************************ 00:34:40.662 END TEST nvmf_fio_target 00:34:40.662 ************************************ 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:40.662 ************************************ 00:34:40.662 START TEST nvmf_bdevio 00:34:40.662 ************************************ 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:40.662 * Looking for test storage... 
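Job3's err=95 (Operation not supported) above is the point of the hotplug pass: the malloc bdevs backing the namespaces are deleted while fio is still running, so a nonzero fio exit (fio_status=4) is what the script asserts on ("fio failed as expected"). The nvmftestfini teardown that follows always runs in the same order; a condensed sketch of that order, with the PID and namespace taken from this run and the helper name purely illustrative:

    # Condensed teardown in the order traced above (cleanup_target is an
    # illustrative name; the real logic is spread across nvmf/common.sh).
    cleanup_target() {
        local pid=$1 ns=$2
        sync                                      # flush outstanding I/O first
        modprobe -v -r nvme-tcp nvme-fabrics      # retried with set +e in the real script
        kill "$pid"
        while kill -0 "$pid" 2>/dev/null; do sleep 1; done    # wait for nvmf_tgt to exit
        iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only SPDK-tagged rules
        ip netns delete "$ns" 2>/dev/null         # remove the test namespace, if present
    }
    cleanup_target 3672521 cvl_0_0_ns_spdk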
00:34:40.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:40.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.662 --rc genhtml_branch_coverage=1 00:34:40.662 --rc genhtml_function_coverage=1 00:34:40.662 --rc genhtml_legend=1 00:34:40.662 --rc geninfo_all_blocks=1 00:34:40.662 --rc geninfo_unexecuted_blocks=1 00:34:40.662 00:34:40.662 ' 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:40.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.662 --rc genhtml_branch_coverage=1 00:34:40.662 --rc genhtml_function_coverage=1 00:34:40.662 --rc genhtml_legend=1 00:34:40.662 --rc geninfo_all_blocks=1 00:34:40.662 --rc geninfo_unexecuted_blocks=1 00:34:40.662 00:34:40.662 ' 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:40.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.662 --rc genhtml_branch_coverage=1 00:34:40.662 --rc genhtml_function_coverage=1 00:34:40.662 --rc genhtml_legend=1 00:34:40.662 --rc geninfo_all_blocks=1 00:34:40.662 --rc geninfo_unexecuted_blocks=1 00:34:40.662 00:34:40.662 ' 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:40.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.662 --rc genhtml_branch_coverage=1 00:34:40.662 --rc genhtml_function_coverage=1 00:34:40.662 --rc genhtml_legend=1 00:34:40.662 --rc geninfo_all_blocks=1 00:34:40.662 --rc geninfo_unexecuted_blocks=1 00:34:40.662 00:34:40.662 ' 00:34:40.662 07:48:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:40.662 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:40.663 07:48:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:40.663 07:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:48.801 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:48.801 07:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:48.801 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:48.801 Found net devices under 0000:31:00.0: cvl_0_0 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:48.801 Found net devices under 0000:31:00.1: cvl_0_1 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:48.801 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:48.802 07:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:48.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:48.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:34:48.802 00:34:48.802 --- 10.0.0.2 ping statistics --- 00:34:48.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.802 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:48.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:48.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:34:48.802 00:34:48.802 --- 10.0.0.1 ping statistics --- 00:34:48.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.802 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:48.802 07:49:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3681224 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3681224 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3681224 ']' 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:48.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:48.802 07:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:48.802 [2024-11-20 07:49:06.393741] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:48.802 [2024-11-20 07:49:06.394927] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:34:48.802 [2024-11-20 07:49:06.394979] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:48.802 [2024-11-20 07:49:06.495473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:48.802 [2024-11-20 07:49:06.546475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:48.802 [2024-11-20 07:49:06.546523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:48.802 [2024-11-20 07:49:06.546532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:48.802 [2024-11-20 07:49:06.546539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:48.802 [2024-11-20 07:49:06.546545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:48.802 [2024-11-20 07:49:06.548576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:48.802 [2024-11-20 07:49:06.548739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:48.802 [2024-11-20 07:49:06.548901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:48.802 [2024-11-20 07:49:06.549000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:48.802 [2024-11-20 07:49:06.634354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
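nvmf_tgt is launched here inside the cvl_0_0_ns_spdk namespace with --interrupt-mode and core mask 0x78, and the four "Reactor started on core" notices (cores 3 through 6) follow directly from that mask: 0x78 is binary 01111000, so bits 3..6 are set. Any SPDK -m mask can be decoded the same way:

    # Decode an SPDK core mask into reactor core IDs: 0x78 -> cores 3,4,5,6.
    mask=0x78
    for core in {0..31}; do
        (( (mask >> core) & 1 )) && echo "reactor core $core"
    done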
00:34:48.802 [2024-11-20 07:49:06.635327] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:48.802 [2024-11-20 07:49:06.635690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:48.802 [2024-11-20 07:49:06.636137] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:48.802 [2024-11-20 07:49:06.636196] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:49.064 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:49.064 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:34:49.064 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:49.064 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:49.064 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.064 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:49.064 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:49.064 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.064 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.064 [2024-11-20 07:49:07.261900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:49.325 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.325 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:49.325 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.325 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.325 Malloc0 00:34:49.325 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.326 07:49:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.326 [2024-11-20 07:49:07.354055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:49.326 { 00:34:49.326 "params": { 00:34:49.326 "name": "Nvme$subsystem", 00:34:49.326 "trtype": "$TEST_TRANSPORT", 00:34:49.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:49.326 "adrfam": "ipv4", 00:34:49.326 "trsvcid": "$NVMF_PORT", 00:34:49.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:49.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:49.326 "hdgst": ${hdgst:-false}, 00:34:49.326 "ddgst": ${ddgst:-false} 00:34:49.326 }, 00:34:49.326 "method": "bdev_nvme_attach_controller" 00:34:49.326 } 00:34:49.326 EOF 00:34:49.326 )") 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:49.326 07:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:49.326 "params": { 00:34:49.326 "name": "Nvme1", 00:34:49.326 "trtype": "tcp", 00:34:49.326 "traddr": "10.0.0.2", 00:34:49.326 "adrfam": "ipv4", 00:34:49.326 "trsvcid": "4420", 00:34:49.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:49.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:49.326 "hdgst": false, 00:34:49.326 "ddgst": false 00:34:49.326 }, 00:34:49.326 "method": "bdev_nvme_attach_controller" 00:34:49.326 }' 00:34:49.326 [2024-11-20 07:49:07.412532] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
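The JSON block printed by gen_nvmf_target_json above is the bdev_nvme_attach_controller config that bdevio consumes over /dev/fd/62. Against an already-running SPDK application the same attachment can be expressed as an RPC; a sketch with flag spellings as in current rpc.py, not a command from this run:

    # Attach the target's cnode1 subsystem as bdev "Nvme1" over TCP.
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1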
00:34:49.326 [2024-11-20 07:49:07.412602] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3681287 ] 00:34:49.326 [2024-11-20 07:49:07.509650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:49.586 [2024-11-20 07:49:07.566922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.587 [2024-11-20 07:49:07.567086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:49.587 [2024-11-20 07:49:07.567087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.587 I/O targets: 00:34:49.587 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:49.587 00:34:49.587 00:34:49.587 CUnit - A unit testing framework for C - Version 2.1-3 00:34:49.587 http://cunit.sourceforge.net/ 00:34:49.587 00:34:49.587 00:34:49.587 Suite: bdevio tests on: Nvme1n1 00:34:49.587 Test: blockdev write read block ...passed 00:34:49.847 Test: blockdev write zeroes read block ...passed 00:34:49.847 Test: blockdev write zeroes read no split ...passed 00:34:49.847 Test: blockdev write zeroes read split ...passed 00:34:49.847 Test: blockdev write zeroes read split partial ...passed 00:34:49.847 Test: blockdev reset ...[2024-11-20 07:49:07.861008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:49.847 [2024-11-20 07:49:07.861100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21031c0 (9): Bad file descriptor 00:34:49.847 [2024-11-20 07:49:07.956592] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:34:49.847 passed 00:34:49.847 Test: blockdev write read 8 blocks ...passed 00:34:49.847 Test: blockdev write read size > 128k ...passed 00:34:49.847 Test: blockdev write read invalid size ...passed 00:34:49.847 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:49.847 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:49.847 Test: blockdev write read max offset ...passed 00:34:50.108 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:50.108 Test: blockdev writev readv 8 blocks ...passed 00:34:50.108 Test: blockdev writev readv 30 x 1block ...passed 00:34:50.108 Test: blockdev writev readv block ...passed 00:34:50.108 Test: blockdev writev readv size > 128k ...passed 00:34:50.108 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:50.108 Test: blockdev comparev and writev ...[2024-11-20 07:49:08.183970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.108 [2024-11-20 07:49:08.184019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.108 [2024-11-20 07:49:08.184035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.108 [2024-11-20 07:49:08.184044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:50.108 [2024-11-20 07:49:08.184710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.108 [2024-11-20 07:49:08.184724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:50.108 [2024-11-20 07:49:08.184738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.108 [2024-11-20 07:49:08.184753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:50.108 [2024-11-20 07:49:08.185358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.108 [2024-11-20 07:49:08.185377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:50.108 [2024-11-20 07:49:08.185391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.108 [2024-11-20 07:49:08.185399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:50.108 [2024-11-20 07:49:08.186013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.108 [2024-11-20 07:49:08.186024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:50.108 [2024-11-20 07:49:08.186038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.108 [2024-11-20 07:49:08.186046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:50.108 passed 00:34:50.108 Test: blockdev nvme passthru rw ...passed 00:34:50.108 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:49:08.269689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:50.108 [2024-11-20 07:49:08.269708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:50.108 [2024-11-20 07:49:08.270116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:50.108 [2024-11-20 07:49:08.270127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:50.108 [2024-11-20 07:49:08.270510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:50.108 [2024-11-20 07:49:08.270521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:50.108 [2024-11-20 07:49:08.270896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:50.108 [2024-11-20 07:49:08.270907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:50.108 passed 00:34:50.108 Test: blockdev nvme admin passthru ...passed 00:34:50.370 Test: blockdev copy ...passed 00:34:50.370 00:34:50.370 Run Summary: Type Total Ran Passed Failed Inactive 00:34:50.370 suites 1 1 n/a 0 0 00:34:50.370 tests 23 23 23 0 0 00:34:50.370 asserts 152 152 152 0 n/a 00:34:50.370 00:34:50.370 Elapsed time = 1.194 seconds 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:50.370 rmmod nvme_tcp 00:34:50.370 rmmod nvme_fabrics 00:34:50.370 rmmod nvme_keyring 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
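The failure NOTICEs inside "comparev and writev" and "nvme passthru" are the negative paths the suite asserts on, not regressions: fused COMPARE+WRITE pairs are sent against mismatching data, so each COMPARE completes with status 02/85 and its fused WRITE is aborted with 00/09, while the vendor-specific passthru draws 00/01. The earlier ERROR during "blockdev reset" is the same idea: the qpair is torn down mid-reset, hence Bad file descriptor before the reconnect succeeds. Decoding those (SCT/SC) pairs per the NVMe base specification:

    # Decode the (SCT/SC) status pairs seen in the bdevio output above.
    decode_status() {
        case "$1/$2" in
            02/85) echo "Media and Data Integrity Errors / Compare Failure" ;;
            00/09) echo "Generic / Command Aborted due to Failed Fused Command" ;;
            00/01) echo "Generic / Invalid Command Opcode" ;;
            *)     echo "unlisted here - see the NVMe base spec status tables ($1/$2)" ;;
        esac
    }
    decode_status 02 85   # -> Media and Data Integrity Errors / Compare Failure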
00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3681224 ']' 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3681224 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3681224 ']' 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3681224 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:50.370 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3681224 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3681224' 00:34:50.631 killing process with pid 3681224 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3681224 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3681224 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:50.631 07:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.177 07:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:53.177 00:34:53.177 real 0m12.442s 00:34:53.177 user 
0m9.553s 00:34:53.177 sys 0m6.594s 00:34:53.177 07:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:53.177 07:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:53.177 ************************************ 00:34:53.177 END TEST nvmf_bdevio 00:34:53.177 ************************************ 00:34:53.177 07:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:53.177 00:34:53.177 real 5m2.651s 00:34:53.177 user 10m16.812s 00:34:53.177 sys 2m7.199s 00:34:53.177 07:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:53.177 07:49:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:53.177 ************************************ 00:34:53.177 END TEST nvmf_target_core_interrupt_mode 00:34:53.177 ************************************ 00:34:53.177 07:49:10 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:53.177 07:49:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:53.177 07:49:10 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:53.177 07:49:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.177 ************************************ 00:34:53.177 START TEST nvmf_interrupt 00:34:53.177 ************************************ 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:53.177 * Looking for test storage... 
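[editor's note] Every suite in this log, including the nvmf_interrupt run starting above (its test-storage probe continues below), is driven by the same wrapper: run_test prints the START/END banners, executes the suite under bash's time builtin (hence the real/user/sys triplet), and propagates the exit code. A rough sketch, assuming a simplified form of the autotest_common.sh helper:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                 # timing output becomes the real/user/sys block
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }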
00:34:53.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:53.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.177 --rc genhtml_branch_coverage=1 00:34:53.177 --rc genhtml_function_coverage=1 00:34:53.177 --rc genhtml_legend=1 00:34:53.177 --rc geninfo_all_blocks=1 00:34:53.177 --rc geninfo_unexecuted_blocks=1 00:34:53.177 00:34:53.177 ' 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:53.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.177 --rc genhtml_branch_coverage=1 00:34:53.177 --rc genhtml_function_coverage=1 00:34:53.177 --rc genhtml_legend=1 00:34:53.177 --rc geninfo_all_blocks=1 00:34:53.177 --rc geninfo_unexecuted_blocks=1 00:34:53.177 00:34:53.177 ' 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:53.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.177 --rc genhtml_branch_coverage=1 00:34:53.177 --rc genhtml_function_coverage=1 00:34:53.177 --rc genhtml_legend=1 00:34:53.177 --rc geninfo_all_blocks=1 00:34:53.177 --rc geninfo_unexecuted_blocks=1 00:34:53.177 00:34:53.177 ' 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:53.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.177 --rc genhtml_branch_coverage=1 00:34:53.177 --rc genhtml_function_coverage=1 00:34:53.177 --rc genhtml_legend=1 00:34:53.177 --rc geninfo_all_blocks=1 00:34:53.177 --rc geninfo_unexecuted_blocks=1 00:34:53.177 00:34:53.177 ' 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:53.177 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:53.178 07:49:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:01.356 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.356 07:49:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:01.356 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:01.356 Found net devices under 0000:31:00.0: cvl_0_0 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:01.356 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:01.357 Found net devices under 0000:31:00.1: cvl_0_1 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:01.357 07:49:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:01.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:35:01.357 00:35:01.357 --- 10.0.0.2 ping statistics --- 00:35:01.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.357 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
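[editor's note] What nvmf_tcp_init traced above (with the second ping's output continuing below) is the standard two-port topology for phy runs: one port of the e810 NIC moves into a private network namespace to act as the target, the other stays in the host namespace as the initiator, so NVMe/TCP traffic crosses the NIC rather than the loopback device. A minimal sketch using the interface names and addresses from this run, error handling omitted:

    # target side lives in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator keeps 10.0.0.1 in the host namespace; target answers on 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP port through the firewall, then verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1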
00:35:01.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:35:01.357 00:35:01.357 --- 10.0.0.1 ping statistics --- 00:35:01.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.357 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3685788 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3685788 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 3685788 ']' 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:01.357 07:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.357 [2024-11-20 07:49:18.970029] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:01.357 [2024-11-20 07:49:18.971200] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:35:01.357 [2024-11-20 07:49:18.971251] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.357 [2024-11-20 07:49:19.072323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:01.357 [2024-11-20 07:49:19.124326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:01.357 [2024-11-20 07:49:19.124378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:01.357 [2024-11-20 07:49:19.124387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:01.357 [2024-11-20 07:49:19.124394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:01.357 [2024-11-20 07:49:19.124400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:01.357 [2024-11-20 07:49:19.126117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.357 [2024-11-20 07:49:19.126122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.357 [2024-11-20 07:49:19.204160] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:01.357 [2024-11-20 07:49:19.204769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:01.357 [2024-11-20 07:49:19.205082] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:01.618 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:01.618 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:35:01.618 07:49:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:01.618 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:01.618 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.618 07:49:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:01.618 07:49:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:01.964 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:01.964 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:01.964 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:01.964 5000+0 records in 00:35:01.964 5000+0 records out 00:35:01.964 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0190188 s, 538 MB/s 00:35:01.964 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:01.964 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.964 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.964 AIO0 00:35:01.964 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.964 07:49:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:01.964 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.964 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.964 [2024-11-20 07:49:19.891192] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.965 07:49:19 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.965 [2024-11-20 07:49:19.935740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3685788 0 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3685788 0 idle 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3685788 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3685788 -w 256 00:35:01.965 07:49:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3685788 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.32 reactor_0' 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3685788 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.32 reactor_0 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3685788 1 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3685788 1 idle 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3685788 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3685788 -w 256 00:35:01.965 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3685834 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3685834 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3686043 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3685788 0 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3685788 0 busy 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3685788 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3685788 -w 256 00:35:02.288 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:02.549 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3685788 root 20 0 128.2g 44928 32256 R 50.0 0.0 0:00.40 reactor_0' 00:35:02.549 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3685788 root 20 0 128.2g 44928 32256 R 50.0 0.0 0:00.40 reactor_0 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=50.0 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=50 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3685788 1 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3685788 1 busy 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3685788 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3685788 -w 256 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3685834 root 20 0 128.2g 44928 32256 R 87.5 0.0 0:00.22 reactor_1' 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3685834 root 20 0 128.2g 44928 32256 R 87.5 0.0 0:00.22 reactor_1 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=87.5 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=87 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:02.550 07:49:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3686043 00:35:12.552 Initializing NVMe Controllers 00:35:12.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:12.552 Controller IO queue size 256, less than required. 00:35:12.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:12.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:12.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:12.552 Initialization complete. Launching workers. 
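[editor's note] While spdk_nvme_perf drives I/O (its latency report follows below), reactor_is_busy and reactor_is_idle decide pass or fail by sampling the reactor thread's CPU usage with top and comparing it to a threshold, 30% during the load phase above. A paraphrase of the interrupt/common.sh probe, with the command and field positions taken from the trace (top's 9th column is %CPU in threads mode):

    # Sample one reactor thread of the target and print its current CPU%.
    reactor_cpu_rate() {
        local pid=$1 idx=$2
        top -bHn 1 -p "$pid" -w 256 \
            | grep "reactor_$idx" \
            | sed -e 's/^\s*//g' \
            | awk '{print $9}'
    }

    rate=$(reactor_cpu_rate 3685788 1)   # e.g. "87.5" under load
    rate=${rate%.*}                      # drop the decimals, as the trace does
    if (( rate > 30 )); then
        echo "reactor_1 is busy"
    else
        echo "reactor_1 is idle"
    fi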
00:35:12.552 ======================================================== 00:35:12.552 Latency(us) 00:35:12.552 Device Information : IOPS MiB/s Average min max 00:35:12.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19459.40 76.01 13159.64 4082.42 33559.89 00:35:12.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19673.00 76.85 13014.70 8190.35 28196.52 00:35:12.552 ======================================================== 00:35:12.552 Total : 39132.40 152.86 13086.77 4082.42 33559.89 00:35:12.552 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3685788 0 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3685788 0 idle 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3685788 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3685788 -w 256 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3685788 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.06 reactor_0' 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3685788 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.06 reactor_0 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3685788 1 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3685788 1 idle 00:35:12.552 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3685788 00:35:12.553 07:49:30 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:35:12.553 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:12.553 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:12.553 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:12.553 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:12.553 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:12.553 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:12.553 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:12.553 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:12.553 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3685788 -w 256 00:35:12.553 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:12.814 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3685834 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.75 reactor_1' 00:35:12.814 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3685834 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.75 reactor_1 00:35:12.814 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:12.814 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:12.814 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:12.814 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:12.814 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:12.814 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:12.814 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:12.814 07:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:12.814 07:49:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:13.757 07:49:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:13.757 07:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:35:13.758 07:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:35:13.758 07:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:35:13.758 07:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3685788 0 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3685788 0 idle 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3685788 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3685788 -w 256 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3685788 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.43 reactor_0' 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3685788 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.43 reactor_0 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3685788 1 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3685788 1 idle 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3685788 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
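[editor's note] After nvme connect, the host has no synchronous signal for when the new namespace appears, so waitforserial (traced a few lines up, while the second idle probe continues below) polls lsblk until a block device advertising the subsystem's serial number shows up. A simplified sketch of that loop, using the serial from this run:

    # Poll up to ~30s for a namespace whose SERIAL matches, as udev settles.
    waitforserial() {
        local serial=$1
        local want=${2:-1}    # how many matching devices to expect
        local i=0 found
        while (( i++ <= 15 )); do
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( found >= want )) && return 0
            sleep 2
        done
        echo "waited for $serial but never found it" >&2
        return 1
    }

    waitforserial SPDKISFASTANDAWESOME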
00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3685788 -w 256 00:35:15.671 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:15.933 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3685834 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:09.90 reactor_1' 00:35:15.933 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3685834 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:09.90 reactor_1 00:35:15.933 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:15.933 07:49:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:15.933 07:49:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:15.933 07:49:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:15.933 07:49:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:15.933 07:49:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:15.933 07:49:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:15.933 07:49:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:15.933 07:49:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:16.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:16.194 rmmod nvme_tcp 00:35:16.194 rmmod nvme_fabrics 00:35:16.194 rmmod nvme_keyring 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3685788 ']' 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3685788 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 3685788 ']' 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 3685788 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3685788 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3685788' 00:35:16.194 killing process with pid 3685788 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 3685788 00:35:16.194 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 3685788 00:35:16.455 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:16.455 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:16.455 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:16.455 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:16.455 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:16.455 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:16.455 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:16.455 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:16.455 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:16.455 07:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.455 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:16.455 07:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.369 07:49:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:18.630 00:35:18.630 real 0m25.567s 00:35:18.630 user 0m40.062s 00:35:18.630 sys 0m9.886s 00:35:18.630 07:49:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:18.630 07:49:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:18.630 ************************************ 00:35:18.630 END TEST nvmf_interrupt 00:35:18.630 ************************************ 00:35:18.630 00:35:18.630 real 30m17.078s 00:35:18.630 user 61m27.825s 00:35:18.630 sys 10m25.276s 00:35:18.630 07:49:36 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:18.630 07:49:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:18.630 ************************************ 00:35:18.630 END TEST nvmf_tcp 00:35:18.630 ************************************ 00:35:18.630 07:49:36 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:35:18.630 07:49:36 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:18.630 07:49:36 -- 
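# Before autotest moves on, the target is torn down with the killprocess
# helper exercised just above (and again at the end of the spdkcli test
# below). A hedged sketch of its core checks:
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                 # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")    # SPDK apps show up as reactor_0
    [ "$name" = sudo ] && return 1             # never signal a bare sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null                   # reap if it is our child; ignore if not
}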
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:18.630 07:49:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:18.630 07:49:36 -- common/autotest_common.sh@10 -- # set +x 00:35:18.630 ************************************ 00:35:18.630 START TEST spdkcli_nvmf_tcp 00:35:18.630 ************************************ 00:35:18.631 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:18.631 * Looking for test storage... 00:35:18.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:18.631 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:18.631 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:35:18.631 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:18.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.892 --rc genhtml_branch_coverage=1 00:35:18.892 --rc genhtml_function_coverage=1 00:35:18.892 --rc genhtml_legend=1 00:35:18.892 --rc geninfo_all_blocks=1 00:35:18.892 --rc geninfo_unexecuted_blocks=1 00:35:18.892 00:35:18.892 ' 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:18.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.892 --rc genhtml_branch_coverage=1 00:35:18.892 --rc genhtml_function_coverage=1 00:35:18.892 --rc genhtml_legend=1 00:35:18.892 --rc geninfo_all_blocks=1 00:35:18.892 --rc geninfo_unexecuted_blocks=1 00:35:18.892 00:35:18.892 ' 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:18.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.892 --rc genhtml_branch_coverage=1 00:35:18.892 --rc genhtml_function_coverage=1 00:35:18.892 --rc genhtml_legend=1 00:35:18.892 --rc geninfo_all_blocks=1 00:35:18.892 --rc geninfo_unexecuted_blocks=1 00:35:18.892 00:35:18.892 ' 00:35:18.892 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:18.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.893 --rc genhtml_branch_coverage=1 00:35:18.893 --rc genhtml_function_coverage=1 00:35:18.893 --rc genhtml_legend=1 00:35:18.893 --rc geninfo_all_blocks=1 00:35:18.893 --rc geninfo_unexecuted_blocks=1 00:35:18.893 00:35:18.893 ' 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:18.893 
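The lcov probe just traced leans on the version helpers in scripts/common.sh. A compressed sketch of lt/cmp_versions, assuming every version field is numeric (the real helper also validates each field through its decimal function):

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.-:   # split on dots, dashes and colons, e.g. "1.15" -> (1 15)
    local op=$2 ver1 ver2 v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a > b)) && { [[ $op == *'>'* ]]; return; }
        ((a < b)) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *'='* ]]   # versions equal: only <=, >=, == succeed
}
# lt 1.15 2 -> 0 (true), which is how lcov 1.x is detected above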
07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:18.893 07:49:36 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:18.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3689249 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3689249 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 3689249 ']' 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:18.893 07:49:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:18.893 [2024-11-20 07:49:36.997983] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
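Two things in this stretch are easy to miss. The "[: : integer expression expected" line is a real, if harmless, script warning: '[' '' -eq 1 ']' tests an unset variable numerically, and a default expansion such as [ "${FLAG:-0}" -eq 1 ] (FLAG is a placeholder name here, not the variable common.sh actually uses) would avoid it. Second, waitforlisten is what keeps spdkcli from racing the target's startup; a hedged sketch, assuming the default /var/tmp/spdk.sock RPC socket shown in the trace:

./build/bin/nvmf_tgt -m 0x3 -p 0 &
nvmf_tgt_pid=$!
for ((i = 0; i < 100; i++)); do
    # "listening" means the JSON-RPC socket answers a request
    scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null && break
    kill -0 "$nvmf_tgt_pid" || exit 1   # bail out if the target died during init
    sleep 0.1
done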
00:35:18.893 [2024-11-20 07:49:36.998059] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3689249 ] 00:35:18.893 [2024-11-20 07:49:37.093614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:19.154 [2024-11-20 07:49:37.148633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.154 [2024-11-20 07:49:37.148637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.726 07:49:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:19.726 07:49:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:35:19.726 07:49:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:19.726 07:49:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:19.726 07:49:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:19.726 07:49:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:19.726 07:49:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:19.726 07:49:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:19.726 07:49:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:19.726 07:49:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:19.726 07:49:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:19.726 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:19.726 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:19.726 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:19.726 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:19.726 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:19.726 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:19.726 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:19.726 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:19.726 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:19.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:19.726 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:19.726 ' 00:35:23.023 [2024-11-20 07:49:40.555343] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:23.964 [2024-11-20 07:49:41.919651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:26.509 [2024-11-20 07:49:44.438660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:29.052 [2024-11-20 07:49:46.656969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:30.437 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:30.437 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:30.437 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:30.437 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:30.437 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:30.437 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:30.437 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:30.437 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:30.437 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:30.437 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:30.437 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:30.437 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:30.437 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:30.437 07:49:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:30.437 07:49:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:30.437 07:49:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.437 07:49:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:30.437 07:49:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:30.437 07:49:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.437 07:49:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:30.437 07:49:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:30.697 07:49:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:30.957 07:49:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:30.957 07:49:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:30.957 07:49:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:30.957 07:49:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.957 
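Every "Executing command" pair above is spdkcli_job.py feeding one line to SPDK's configshell-based CLI and verifying the expected output. Roughly equivalent one-shot invocations by hand (spdkcli.py executes its argv as a single command, as the ll /nvmf call inside check_match shows):

scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
scripts/spdkcli.py ll /nvmf   # the listing check_match diffs against spdkcli_nvmf.test.match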
07:49:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:30.957 07:49:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:30.957 07:49:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.957 07:49:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:30.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:30.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:30.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:30.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:30.957 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:30.957 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:30.957 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:30.957 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:30.957 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:30.957 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:30.957 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:30.957 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:30.957 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:30.957 ' 00:35:37.548 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:37.548 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:37.548 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:37.548 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:37.548 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:37.548 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:37.548 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:37.548 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:37.548 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:37.548 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:37.548 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:37.548 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:37.548 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:37.548 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:37.548 07:49:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:37.548 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:37.548 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:37.548 
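The deletions above run in dependency order: namespaces, hosts and listeners first, then the subsystems, then the backing malloc bdevs. A quick hedged way to confirm the target really is empty afterwards, over the same RPC socket:

scripts/rpc.py nvmf_get_subsystems   # expect only the discovery subsystem to remain
scripts/rpc.py bdev_get_bdevs        # expect []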
07:49:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3689249 00:35:37.548 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3689249 ']' 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3689249 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3689249 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3689249' 00:35:37.549 killing process with pid 3689249 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 3689249 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 3689249 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3689249 ']' 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3689249 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3689249 ']' 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3689249 00:35:37.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3689249) - No such process 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 3689249 is not found' 00:35:37.549 Process with pid 3689249 is not found 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:37.549 00:35:37.549 real 0m18.155s 00:35:37.549 user 0m40.303s 00:35:37.549 sys 0m0.894s 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:37.549 07:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:37.549 ************************************ 00:35:37.549 END TEST spdkcli_nvmf_tcp 00:35:37.549 ************************************ 00:35:37.549 07:49:54 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:37.549 07:49:54 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:37.549 07:49:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:37.549 07:49:54 -- common/autotest_common.sh@10 -- # set +x 00:35:37.549 ************************************ 00:35:37.549 START TEST nvmf_identify_passthru 00:35:37.549 ************************************ 00:35:37.549 07:49:54 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:37.549 * Looking for test 
storage... 00:35:37.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:37.549 07:49:55 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:37.549 07:49:55 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:37.549 07:49:55 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:37.549 07:49:55 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:37.549 07:49:55 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:37.549 07:49:55 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:37.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.549 --rc genhtml_branch_coverage=1 00:35:37.549 --rc genhtml_function_coverage=1 00:35:37.549 --rc genhtml_legend=1 00:35:37.549 --rc geninfo_all_blocks=1 00:35:37.549 --rc geninfo_unexecuted_blocks=1 00:35:37.549 00:35:37.549 ' 00:35:37.549 07:49:55 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:37.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.549 --rc genhtml_branch_coverage=1 00:35:37.549 --rc genhtml_function_coverage=1 00:35:37.549 --rc genhtml_legend=1 00:35:37.549 --rc geninfo_all_blocks=1 00:35:37.549 --rc geninfo_unexecuted_blocks=1 00:35:37.549 00:35:37.549 ' 00:35:37.549 07:49:55 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:37.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.549 --rc genhtml_branch_coverage=1 00:35:37.549 --rc genhtml_function_coverage=1 00:35:37.549 --rc genhtml_legend=1 00:35:37.549 --rc geninfo_all_blocks=1 00:35:37.549 --rc geninfo_unexecuted_blocks=1 00:35:37.549 00:35:37.549 ' 00:35:37.549 07:49:55 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:37.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.549 --rc genhtml_branch_coverage=1 00:35:37.549 --rc genhtml_function_coverage=1 00:35:37.549 --rc genhtml_legend=1 00:35:37.549 --rc geninfo_all_blocks=1 00:35:37.549 --rc geninfo_unexecuted_blocks=1 00:35:37.549 00:35:37.549 ' 00:35:37.549 07:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:37.549 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:37.549 07:49:55 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:37.549 07:49:55 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.549 07:49:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.549 07:49:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.550 07:49:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:37.550 07:49:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:37.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:37.550 07:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:37.550 07:49:55 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:37.550 07:49:55 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:37.550 07:49:55 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:37.550 07:49:55 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:37.550 07:49:55 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.550 07:49:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.550 07:49:55 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.550 07:49:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:37.550 07:49:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.550 07:49:55 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.550 07:49:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:37.550 07:49:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:37.550 07:49:55 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:37.550 07:49:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:45.691 07:50:02 
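# The array juggling above builds a vendor:device allow-list keyed off
# pci_bus_cache. A hedged stand-in for the scan it feeds, using lspci
# directly (the Intel E810 in this rig is 0x8086:0x159b, as the "Found"
# lines below confirm):
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    # each PCI function exposes its kernel netdev(s) under sysfs
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net device under $pci: $(basename "$dev")"
    done
done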
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:45.691 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:45.691 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:45.691 Found net devices under 0000:31:00.0: cvl_0_0 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:45.691 Found net devices under 0000:31:00.1: cvl_0_1 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:45.691 07:50:02 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.691 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:35:45.691 00:35:45.691 --- 10.0.0.2 ping statistics --- 00:35:45.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.692 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:35:45.692 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
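Condensing the nvmf_tcp_init trace above to its essentials: one E810 port is moved into a fresh namespace and becomes the target at 10.0.0.2, its link partner stays in the root namespace as the initiator at 10.0.0.1, and the two pings verify the path in both directions before any NVMe/TCP traffic is attempted:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1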
00:35:45.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:35:45.692 00:35:45.692 --- 10.0.0.1 ping statistics --- 00:35:45.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.692 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:35:45.692 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.692 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:45.692 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:45.692 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.692 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:45.692 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:45.692 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.692 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:45.692 07:50:02 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:45.692 07:50:02 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.692 07:50:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:45.692 07:50:02 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:45.692 07:50:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:45.692 07:50:02 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:45.692 07:50:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:45.692 07:50:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:45.692 07:50:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:45.692 07:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605500 00:35:45.692 07:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:45.692 07:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:45.692 07:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:45.692 07:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:45.692 07:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:45.692 07:50:03 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:45.692 07:50:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.692 07:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:45.692 07:50:03 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:45.692 07:50:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.953 07:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3696686 00:35:45.953 07:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:45.953 07:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:45.953 07:50:03 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3696686 00:35:45.953 07:50:03 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 3696686 ']' 00:35:45.953 07:50:03 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.953 07:50:03 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:45.953 07:50:03 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.953 07:50:03 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:45.953 07:50:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.953 [2024-11-20 07:50:03.967661] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:35:45.953 [2024-11-20 07:50:03.967727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.953 [2024-11-20 07:50:04.068770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:45.953 [2024-11-20 07:50:04.122297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.953 [2024-11-20 07:50:04.122346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
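[Editor's note] The trace above completes two pieces of groundwork before the passthru comparison: one port of the NIC pair is moved into a private network namespace so the target side (cvl_0_0, 10.0.0.2) and the initiator side (cvl_0_1, 10.0.0.1) exchange NVMe/TCP traffic over a real link on a single host, and the drive's Serial/Model numbers are read directly over PCIe as the baseline for the later fabric-path comparison. A minimal standalone sketch of the same steps, assuming the interface names and BDF seen in this log and spdk_nvme_identify on PATH; run as root:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # verify both directions

# Baseline identity straight over PCIe:
bdf=0000:65:00.0
serial=$(spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
model=$(spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')

Putting the target end in a namespace is what lets one machine exercise the full TCP path through the physical ports instead of short-circuiting over loopback.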
00:35:45.953 [2024-11-20 07:50:04.122356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.953 [2024-11-20 07:50:04.122363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.953 [2024-11-20 07:50:04.122369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:45.953 [2024-11-20 07:50:04.124444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.953 [2024-11-20 07:50:04.124607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:45.953 [2024-11-20 07:50:04.124785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:45.953 [2024-11-20 07:50:04.124786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:35:46.897 07:50:04 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.897 INFO: Log level set to 20 00:35:46.897 INFO: Requests: 00:35:46.897 { 00:35:46.897 "jsonrpc": "2.0", 00:35:46.897 "method": "nvmf_set_config", 00:35:46.897 "id": 1, 00:35:46.897 "params": { 00:35:46.897 "admin_cmd_passthru": { 00:35:46.897 "identify_ctrlr": true 00:35:46.897 } 00:35:46.897 } 00:35:46.897 } 00:35:46.897 00:35:46.897 INFO: response: 00:35:46.897 { 00:35:46.897 "jsonrpc": "2.0", 00:35:46.897 "id": 1, 00:35:46.897 "result": true 00:35:46.897 } 00:35:46.897 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.897 07:50:04 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.897 INFO: Setting log level to 20 00:35:46.897 INFO: Setting log level to 20 00:35:46.897 INFO: Log level set to 20 00:35:46.897 INFO: Log level set to 20 00:35:46.897 INFO: Requests: 00:35:46.897 { 00:35:46.897 "jsonrpc": "2.0", 00:35:46.897 "method": "framework_start_init", 00:35:46.897 "id": 1 00:35:46.897 } 00:35:46.897 00:35:46.897 INFO: Requests: 00:35:46.897 { 00:35:46.897 "jsonrpc": "2.0", 00:35:46.897 "method": "framework_start_init", 00:35:46.897 "id": 1 00:35:46.897 } 00:35:46.897 00:35:46.897 [2024-11-20 07:50:04.842704] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:46.897 INFO: response: 00:35:46.897 { 00:35:46.897 "jsonrpc": "2.0", 00:35:46.897 "id": 1, 00:35:46.897 "result": true 00:35:46.897 } 00:35:46.897 00:35:46.897 INFO: response: 00:35:46.897 { 00:35:46.897 "jsonrpc": "2.0", 00:35:46.897 "id": 1, 00:35:46.897 "result": true 00:35:46.897 } 00:35:46.897 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.897 07:50:04 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.897 07:50:04 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:46.897 INFO: Setting log level to 40 00:35:46.897 INFO: Setting log level to 40 00:35:46.897 INFO: Setting log level to 40 00:35:46.897 [2024-11-20 07:50:04.856053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.897 07:50:04 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.897 07:50:04 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.897 07:50:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:47.159 Nvme0n1 00:35:47.159 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.159 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:47.159 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.159 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:47.159 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.159 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:47.159 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.159 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:47.159 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.159 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:47.159 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.159 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:47.159 [2024-11-20 07:50:05.250813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.159 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.159 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:47.159 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.159 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:47.159 [ 00:35:47.159 { 00:35:47.159 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:47.159 "subtype": "Discovery", 00:35:47.159 "listen_addresses": [], 00:35:47.159 "allow_any_host": true, 00:35:47.159 "hosts": [] 00:35:47.159 }, 00:35:47.159 { 00:35:47.159 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:47.159 "subtype": "NVMe", 00:35:47.159 "listen_addresses": [ 00:35:47.159 { 00:35:47.159 "trtype": "TCP", 00:35:47.159 "adrfam": "IPv4", 00:35:47.159 "traddr": "10.0.0.2", 00:35:47.159 "trsvcid": "4420" 00:35:47.159 } 00:35:47.159 ], 00:35:47.159 "allow_any_host": true, 00:35:47.159 "hosts": [], 00:35:47.159 "serial_number": 
"SPDK00000000000001", 00:35:47.159 "model_number": "SPDK bdev Controller", 00:35:47.159 "max_namespaces": 1, 00:35:47.159 "min_cntlid": 1, 00:35:47.159 "max_cntlid": 65519, 00:35:47.159 "namespaces": [ 00:35:47.159 { 00:35:47.159 "nsid": 1, 00:35:47.159 "bdev_name": "Nvme0n1", 00:35:47.159 "name": "Nvme0n1", 00:35:47.159 "nguid": "36344730526055000025384500000031", 00:35:47.159 "uuid": "36344730-5260-5500-0025-384500000031" 00:35:47.159 } 00:35:47.159 ] 00:35:47.159 } 00:35:47.159 ] 00:35:47.159 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.159 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:47.159 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:47.159 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:47.421 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605500 00:35:47.421 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:47.421 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:47.421 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:47.421 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:47.421 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605500 '!=' S64GNE0R605500 ']' 00:35:47.421 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:47.421 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:47.421 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.421 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:47.421 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.421 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:47.421 07:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:47.421 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:47.421 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:47.421 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:47.421 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:47.421 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:47.421 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:47.421 rmmod nvme_tcp 00:35:47.421 rmmod nvme_fabrics 00:35:47.421 rmmod nvme_keyring 00:35:47.682 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:47.682 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:47.682 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:47.682 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
3696686 ']' 00:35:47.683 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3696686 00:35:47.683 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 3696686 ']' 00:35:47.683 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 3696686 00:35:47.683 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:35:47.683 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:47.683 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3696686 00:35:47.683 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:47.683 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:47.683 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3696686' 00:35:47.683 killing process with pid 3696686 00:35:47.683 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 3696686 00:35:47.683 07:50:05 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 3696686 00:35:47.944 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:47.944 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:47.944 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:47.944 07:50:05 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:47.944 07:50:06 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:47.944 07:50:06 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:47.944 07:50:06 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:47.944 07:50:06 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:47.944 07:50:06 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:47.944 07:50:06 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.944 07:50:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:47.944 07:50:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.490 07:50:08 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:50.490 00:35:50.490 real 0m13.162s 00:35:50.490 user 0m9.815s 00:35:50.490 sys 0m6.763s 00:35:50.490 07:50:08 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:50.490 07:50:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:50.490 ************************************ 00:35:50.490 END TEST nvmf_identify_passthru 00:35:50.490 ************************************ 00:35:50.490 07:50:08 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:50.490 07:50:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:50.490 07:50:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:50.490 07:50:08 -- common/autotest_common.sh@10 -- # set +x 00:35:50.490 ************************************ 00:35:50.490 START TEST nvmf_dif 00:35:50.490 ************************************ 00:35:50.490 07:50:08 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:50.490 * Looking for test storage... 
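[Editor's note] The pass/fail of the test just completed is the pair of string comparisons logged above: the Serial/Model re-read through the fabric path must equal the PCIe baseline, which is what "'[' S64GNE0R605500 '!=' S64GNE0R605500 ']'" falling through means. A sketch of that check plus the teardown pattern, reusing $serial from the earlier sketch; remove_spdk_ns runs with xtrace disabled here, so the "ip netns delete" is an assumption about its body:

fab_serial=$(spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | grep 'Serial Number:' | awk '{print $3}')
[ "$fab_serial" != "$serial" ] && { echo "passthru identity mismatch"; exit 1; }

kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null        # stop nvmf_tgt and reap it
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the rules tagged at insert time
ip netns delete cvl_0_0_ns_spdk                       # assumed: physical port falls back to the root namespace
ip -4 addr flush cvl_0_1

Tagging each inserted rule with "-m comment --comment SPDK_NVMF:..." at setup time is what makes the iptables-save | grep -v | iptables-restore cleanup safe without tracking rule numbers.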
00:35:50.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:50.490 07:50:08 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:50.490 07:50:08 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:35:50.490 07:50:08 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:50.490 07:50:08 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:50.490 07:50:08 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:50.490 07:50:08 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:50.490 07:50:08 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:50.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.491 --rc genhtml_branch_coverage=1 00:35:50.491 --rc genhtml_function_coverage=1 00:35:50.491 --rc genhtml_legend=1 00:35:50.491 --rc geninfo_all_blocks=1 00:35:50.491 --rc geninfo_unexecuted_blocks=1 00:35:50.491 00:35:50.491 ' 00:35:50.491 07:50:08 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:50.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.491 --rc genhtml_branch_coverage=1 00:35:50.491 --rc genhtml_function_coverage=1 00:35:50.491 --rc genhtml_legend=1 00:35:50.491 --rc geninfo_all_blocks=1 00:35:50.491 --rc geninfo_unexecuted_blocks=1 00:35:50.491 00:35:50.491 ' 00:35:50.491 07:50:08 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:35:50.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.491 --rc genhtml_branch_coverage=1 00:35:50.491 --rc genhtml_function_coverage=1 00:35:50.491 --rc genhtml_legend=1 00:35:50.491 --rc geninfo_all_blocks=1 00:35:50.491 --rc geninfo_unexecuted_blocks=1 00:35:50.491 00:35:50.491 ' 00:35:50.491 07:50:08 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:50.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.491 --rc genhtml_branch_coverage=1 00:35:50.491 --rc genhtml_function_coverage=1 00:35:50.491 --rc genhtml_legend=1 00:35:50.491 --rc geninfo_all_blocks=1 00:35:50.491 --rc geninfo_unexecuted_blocks=1 00:35:50.491 00:35:50.491 ' 00:35:50.491 07:50:08 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:50.491 07:50:08 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:50.491 07:50:08 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:50.491 07:50:08 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:50.491 07:50:08 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:50.491 07:50:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.491 07:50:08 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.491 07:50:08 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.491 07:50:08 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:50.491 07:50:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:50.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:50.491 07:50:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:50.491 07:50:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:50.491 07:50:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:50.491 07:50:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:50.491 07:50:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.491 07:50:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:50.491 07:50:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:50.491 07:50:08 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:50.491 07:50:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:58.626 07:50:15 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:58.627 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.627 
07:50:15 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:58.627 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:58.627 Found net devices under 0000:31:00.0: cvl_0_0 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:58.627 Found net devices under 0000:31:00.1: cvl_0_1 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:58.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:58.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:35:58.627 00:35:58.627 --- 10.0.0.2 ping statistics --- 00:35:58.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.627 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:58.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:58.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:35:58.627 00:35:58.627 --- 10.0.0.1 ping statistics --- 00:35:58.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.627 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:58.627 07:50:15 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:01.181 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:01.181 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:01.181 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:01.442 07:50:19 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:01.442 07:50:19 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:01.442 07:50:19 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:01.442 07:50:19 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:01.442 07:50:19 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:01.442 07:50:19 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:01.442 07:50:19 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:01.442 07:50:19 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:01.442 07:50:19 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:01.442 07:50:19 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:01.442 07:50:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:01.442 07:50:19 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3702910 00:36:01.442 07:50:19 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3702910 00:36:01.442 07:50:19 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:01.442 07:50:19 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 3702910 ']' 00:36:01.442 07:50:19 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.442 07:50:19 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:01.442 07:50:19 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:01.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.442 07:50:19 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:01.442 07:50:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:01.443 [2024-11-20 07:50:19.582687] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:36:01.443 [2024-11-20 07:50:19.582732] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.703 [2024-11-20 07:50:19.677253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.703 [2024-11-20 07:50:19.712699] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.703 [2024-11-20 07:50:19.712726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.703 [2024-11-20 07:50:19.712733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.703 [2024-11-20 07:50:19.712740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.703 [2024-11-20 07:50:19.712751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.703 [2024-11-20 07:50:19.713325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.274 07:50:20 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:02.274 07:50:20 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:36:02.274 07:50:20 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:02.274 07:50:20 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:02.274 07:50:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:02.274 07:50:20 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:02.274 07:50:20 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:02.274 07:50:20 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:02.274 07:50:20 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.274 07:50:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:02.274 [2024-11-20 07:50:20.446988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.274 07:50:20 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.274 07:50:20 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:02.274 07:50:20 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:02.274 07:50:20 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:02.274 07:50:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:02.536 ************************************ 00:36:02.536 START TEST fio_dif_1_default 00:36:02.536 ************************************ 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:02.536 bdev_null0 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:02.536 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:02.537 [2024-11-20 07:50:20.535464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:02.537 { 00:36:02.537 "params": { 00:36:02.537 "name": "Nvme$subsystem", 00:36:02.537 "trtype": "$TEST_TRANSPORT", 00:36:02.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.537 "adrfam": "ipv4", 00:36:02.537 "trsvcid": "$NVMF_PORT", 00:36:02.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.537 "hdgst": ${hdgst:-false}, 00:36:02.537 
"ddgst": ${ddgst:-false} 00:36:02.537 }, 00:36:02.537 "method": "bdev_nvme_attach_controller" 00:36:02.537 } 00:36:02.537 EOF 00:36:02.537 )") 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:02.537 "params": { 00:36:02.537 "name": "Nvme0", 00:36:02.537 "trtype": "tcp", 00:36:02.537 "traddr": "10.0.0.2", 00:36:02.537 "adrfam": "ipv4", 00:36:02.537 "trsvcid": "4420", 00:36:02.537 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:02.537 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:02.537 "hdgst": false, 00:36:02.537 "ddgst": false 00:36:02.537 }, 00:36:02.537 "method": "bdev_nvme_attach_controller" 00:36:02.537 }' 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:02.537 07:50:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:02.798 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:02.798 fio-3.35 00:36:02.798 Starting 1 thread 00:36:15.043 00:36:15.043 filename0: (groupid=0, jobs=1): err= 0: pid=3703443: Wed Nov 20 07:50:31 2024 00:36:15.043 read: IOPS=240, BW=961KiB/s (984kB/s)(9632KiB/10022msec) 00:36:15.043 slat (nsec): min=5503, max=88484, avg=6701.90, stdev=2234.36 00:36:15.043 clat (usec): min=565, max=42133, avg=16628.17, stdev=19652.50 00:36:15.043 lat (usec): min=571, max=42169, avg=16634.87, stdev=19652.22 00:36:15.043 clat percentiles (usec): 00:36:15.043 | 1.00th=[ 603], 5.00th=[ 807], 10.00th=[ 832], 20.00th=[ 857], 00:36:15.043 | 30.00th=[ 898], 40.00th=[ 971], 50.00th=[ 1004], 60.00th=[ 1057], 00:36:15.043 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:15.043 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:15.043 | 99.99th=[42206] 00:36:15.043 bw ( KiB/s): min= 670, max= 3712, per=99.99%, avg=961.50, stdev=667.94, samples=20 00:36:15.043 iops : min= 167, max= 928, avg=240.35, stdev=167.00, samples=20 00:36:15.043 lat (usec) : 750=2.70%, 1000=46.10% 00:36:15.043 lat (msec) : 2=12.17%, 50=39.04% 00:36:15.043 cpu : usr=93.70%, sys=6.05%, ctx=16, majf=0, minf=310 00:36:15.043 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:15.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.043 issued rwts: total=2408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.043 latency : target=0, window=0, percentile=100.00%, depth=4 
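[Editor's note] The generated job file itself is never echoed, but the banner line pins its shape: one thread, rw=randread, bs=4096, iodepth=4, ioengine=spdk_bdev (passed on the command line), roughly a 10 s run. A jobfile consistent with that banner; the exact options gen_fio_conf emits, and the Nvme0n1 filename, are assumptions:

# ioengine=spdk_bdev is supplied on the fio command line in the trace above
[global]
thread=1
time_based=1
runtime=10

[filename0]
rw=randread
bs=4096
iodepth=4
# bdev created by the "name": "Nvme0" attach call (assumed name)
filename=Nvme0n1

The summary numbers above are self-consistent: issued rwts shows 2408 reads x 4 KiB = 9632 KiB, which over the 10022 ms run is the reported 961 KiB/s, and the 39% of completions landing near 41 ms (the 70th-95th percentiles) are what pull the average completion latency up to 16.6 ms despite the sub-millisecond cluster below them.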
00:36:15.043 00:36:15.043 Run status group 0 (all jobs): 00:36:15.043 READ: bw=961KiB/s (984kB/s), 961KiB/s-961KiB/s (984kB/s-984kB/s), io=9632KiB (9863kB), run=10022-10022msec 00:36:15.043 07:50:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:15.043 07:50:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:15.043 07:50:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:15.043 07:50:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.044 00:36:15.044 real 0m11.251s 00:36:15.044 user 0m19.610s 00:36:15.044 sys 0m1.051s 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 ************************************ 00:36:15.044 END TEST fio_dif_1_default 00:36:15.044 ************************************ 00:36:15.044 07:50:31 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:15.044 07:50:31 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:15.044 07:50:31 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:15.044 07:50:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 ************************************ 00:36:15.044 START TEST fio_dif_1_multi_subsystems 00:36:15.044 ************************************ 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 bdev_null0 00:36:15.044 07:50:31 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 [2024-11-20 07:50:31.866390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 bdev_null1 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:15.044 { 00:36:15.044 "params": { 00:36:15.044 "name": "Nvme$subsystem", 00:36:15.044 "trtype": "$TEST_TRANSPORT", 00:36:15.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.044 "adrfam": "ipv4", 00:36:15.044 "trsvcid": "$NVMF_PORT", 00:36:15.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.044 "hdgst": ${hdgst:-false}, 00:36:15.044 "ddgst": ${ddgst:-false} 00:36:15.044 }, 00:36:15.044 "method": "bdev_nvme_attach_controller" 00:36:15.044 } 00:36:15.044 EOF 00:36:15.044 )") 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.044 
07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:15.044 { 00:36:15.044 "params": { 00:36:15.044 "name": "Nvme$subsystem", 00:36:15.044 "trtype": "$TEST_TRANSPORT", 00:36:15.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.044 "adrfam": "ipv4", 00:36:15.044 "trsvcid": "$NVMF_PORT", 00:36:15.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.044 "hdgst": ${hdgst:-false}, 00:36:15.044 "ddgst": ${ddgst:-false} 00:36:15.044 }, 00:36:15.044 "method": "bdev_nvme_attach_controller" 00:36:15.044 } 00:36:15.044 EOF 00:36:15.044 )") 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:15.044 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:15.045 "params": { 00:36:15.045 "name": "Nvme0", 00:36:15.045 "trtype": "tcp", 00:36:15.045 "traddr": "10.0.0.2", 00:36:15.045 "adrfam": "ipv4", 00:36:15.045 "trsvcid": "4420", 00:36:15.045 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:15.045 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:15.045 "hdgst": false, 00:36:15.045 "ddgst": false 00:36:15.045 }, 00:36:15.045 "method": "bdev_nvme_attach_controller" 00:36:15.045 },{ 00:36:15.045 "params": { 00:36:15.045 "name": "Nvme1", 00:36:15.045 "trtype": "tcp", 00:36:15.045 "traddr": "10.0.0.2", 00:36:15.045 "adrfam": "ipv4", 00:36:15.045 "trsvcid": "4420", 00:36:15.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:15.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:15.045 "hdgst": false, 00:36:15.045 "ddgst": false 00:36:15.045 }, 00:36:15.045 "method": "bdev_nvme_attach_controller" 00:36:15.045 }' 00:36:15.045 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:15.045 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:15.045 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:15.045 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.045 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:15.045 07:50:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:15.045 07:50:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib=
00:36:15.045 07:50:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:36:15.045 07:50:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:36:15.045 07:50:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:15.045 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:36:15.045 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:36:15.045 fio-3.35
00:36:15.045 Starting 2 threads
00:36:25.146 
00:36:25.146 filename0: (groupid=0, jobs=1): err= 0: pid=3705641: Wed Nov 20 07:50:43 2024
00:36:25.146 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10002msec)
00:36:25.146 slat (nsec): min=5486, max=32122, avg=6365.34, stdev=1454.95
00:36:25.146 clat (usec): min=589, max=42690, avg=21039.57, stdev=20152.96
00:36:25.146 lat (usec): min=595, max=42696, avg=21045.94, stdev=20152.92
00:36:25.146 clat percentiles (usec):
00:36:25.146 | 1.00th=[ 627], 5.00th=[ 799], 10.00th=[ 824], 20.00th=[ 848],
00:36:25.146 | 30.00th=[ 857], 40.00th=[ 881], 50.00th=[41157], 60.00th=[41157],
00:36:25.146 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:36:25.146 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730],
00:36:25.146 | 99.99th=[42730]
00:36:25.146 bw ( KiB/s): min= 704, max= 768, per=66.24%, avg=761.26, stdev=20.18, samples=19
00:36:25.146 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19
00:36:25.146 lat (usec) : 750=2.68%, 1000=46.00%
00:36:25.146 lat (msec) : 2=1.21%, 50=50.11%
00:36:25.146 cpu : usr=95.55%, sys=4.25%, ctx=14, majf=0, minf=193
00:36:25.146 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:25.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.146 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:25.146 latency : target=0, window=0, percentile=100.00%, depth=4
00:36:25.146 filename1: (groupid=0, jobs=1): err= 0: pid=3705642: Wed Nov 20 07:50:43 2024
00:36:25.146 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10027msec)
00:36:25.146 slat (nsec): min=5543, max=33265, avg=6597.44, stdev=1621.14
00:36:25.146 clat (usec): min=840, max=42859, avg=40908.25, stdev=2584.11
00:36:25.146 lat (usec): min=846, max=42870, avg=40914.85, stdev=2584.20
00:36:25.146 clat percentiles (usec):
00:36:25.146 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:36:25.146 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:36:25.146 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206],
00:36:25.146 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:36:25.146 | 99.99th=[42730]
00:36:25.146 bw ( KiB/s): min= 384, max= 416, per=33.95%, avg=390.40, stdev=13.13, samples=20
00:36:25.146 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20
00:36:25.146 lat (usec) : 1000=0.41%
00:36:25.146 lat (msec) : 50=99.59%
00:36:25.146 cpu : usr=95.66%, sys=4.14%, ctx=14, majf=0, minf=89
00:36:25.146 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:25.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.146 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:25.146 latency : target=0, window=0, percentile=100.00%, depth=4
00:36:25.146 
00:36:25.146 Run status group 0 (all jobs):
00:36:25.146 READ: bw=1149KiB/s (1176kB/s), 391KiB/s-760KiB/s (400kB/s-778kB/s), io=11.2MiB (11.8MB), run=10002-10027msec
00:36:25.146 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:36:25.146 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub
00:36:25.146 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:36:25.146 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:25.147 
00:36:25.147 real 0m11.347s
00:36:25.147 user 0m35.338s
00:36:25.147 sys 0m1.178s
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable
00:36:25.147 07:50:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:25.147 ************************************
00:36:25.147 END TEST fio_dif_1_multi_subsystems
00:36:25.147 ************************************
00:36:25.147 07:50:43 nvmf_dif -- target/dif.sh@143 -- #
run_test fio_dif_rand_params fio_dif_rand_params 00:36:25.147 07:50:43 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:25.147 07:50:43 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:25.147 07:50:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:25.147 ************************************ 00:36:25.147 START TEST fio_dif_rand_params 00:36:25.147 ************************************ 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.147 bdev_null0 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.147 [2024-11-20 07:50:43.298387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
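
The create_subsystem step traced above boils down to four RPCs, shown here as a standalone sketch: it assumes a running SPDK nvmf target (TCP transport already created) and scripts/rpc.py from the same source tree talking to the default RPC socket; the arguments are taken verbatim from the xtrace (a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, exported over NVMe/TCP on 10.0.0.2:4420):

RPC=./scripts/rpc.py
# Null bdev: 64 MiB, 512 B data blocks + 16 B metadata, protection info type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# Subsystem that any host may connect to
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
# Attach the bdev as a namespace and open the TCP listener
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

The destroy_subsystems path at the end of each test is the mirror image: nvmf_delete_subsystem followed by bdev_null_delete, as in the teardown sequences above.
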
00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:25.147 { 00:36:25.147 "params": { 00:36:25.147 "name": "Nvme$subsystem", 00:36:25.147 "trtype": "$TEST_TRANSPORT", 00:36:25.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:25.147 "adrfam": "ipv4", 00:36:25.147 "trsvcid": "$NVMF_PORT", 00:36:25.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:25.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:25.147 "hdgst": ${hdgst:-false}, 00:36:25.147 "ddgst": ${ddgst:-false} 00:36:25.147 }, 00:36:25.147 "method": "bdev_nvme_attach_controller" 00:36:25.147 } 00:36:25.147 EOF 00:36:25.147 )") 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # 
jq . 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:25.147 "params": { 00:36:25.147 "name": "Nvme0", 00:36:25.147 "trtype": "tcp", 00:36:25.147 "traddr": "10.0.0.2", 00:36:25.147 "adrfam": "ipv4", 00:36:25.147 "trsvcid": "4420", 00:36:25.147 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:25.147 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:25.147 "hdgst": false, 00:36:25.147 "ddgst": false 00:36:25.147 }, 00:36:25.147 "method": "bdev_nvme_attach_controller" 00:36:25.147 }' 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:25.147 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:25.430 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:25.430 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:25.430 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:25.430 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:25.430 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:25.430 07:50:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:25.696 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:25.696 ... 
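
The job file itself is generated by gen_fio_conf and passed on /dev/fd/61, so it never appears in the log; the reconstruction below is therefore illustrative rather than copied from the run, assembled from the parameters dif.sh sets for this case (bs=128k, numjobs=3, iodepth=3, runtime=5) and the job line fio prints above. The time_based option and the bdev name Nvme0n1 are assumptions:

# Illustrative only -- the real job file is written to a pipe and not echoed.
cat > rand_params.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF
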
00:36:25.696 fio-3.35 00:36:25.696 Starting 3 threads 00:36:32.277 00:36:32.277 filename0: (groupid=0, jobs=1): err= 0: pid=3707940: Wed Nov 20 07:50:49 2024 00:36:32.277 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(125MiB/5047msec) 00:36:32.277 slat (nsec): min=5590, max=36174, avg=7525.47, stdev=1946.77 00:36:32.277 clat (msec): min=4, max=130, avg=15.10, stdev=17.68 00:36:32.277 lat (msec): min=4, max=130, avg=15.11, stdev=17.68 00:36:32.277 clat percentiles (msec): 00:36:32.277 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:36:32.277 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:36:32.277 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 50], 95.00th=[ 51], 00:36:32.277 | 99.00th=[ 90], 99.50th=[ 91], 99.90th=[ 131], 99.95th=[ 131], 00:36:32.277 | 99.99th=[ 131] 00:36:32.277 bw ( KiB/s): min=16384, max=37376, per=22.74%, avg=25523.20, stdev=6311.26, samples=10 00:36:32.277 iops : min= 128, max= 292, avg=199.40, stdev=49.31, samples=10 00:36:32.277 lat (msec) : 10=80.48%, 20=4.20%, 50=10.01%, 100=5.21%, 250=0.10% 00:36:32.277 cpu : usr=94.95%, sys=4.80%, ctx=9, majf=0, minf=97 00:36:32.277 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.277 issued rwts: total=999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.277 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.277 filename0: (groupid=0, jobs=1): err= 0: pid=3707941: Wed Nov 20 07:50:49 2024 00:36:32.277 read: IOPS=339, BW=42.4MiB/s (44.5MB/s)(214MiB/5045msec) 00:36:32.277 slat (nsec): min=5561, max=75519, avg=6890.90, stdev=2633.12 00:36:32.277 clat (usec): min=3852, max=88507, avg=8801.95, stdev=6603.42 00:36:32.277 lat (usec): min=3858, max=88515, avg=8808.84, stdev=6603.42 00:36:32.277 clat percentiles (usec): 00:36:32.277 | 1.00th=[ 4621], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6390], 00:36:32.277 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 7832], 60.00th=[ 8160], 00:36:32.277 | 70.00th=[ 8717], 80.00th=[ 9372], 90.00th=[10028], 95.00th=[10814], 00:36:32.277 | 99.00th=[47449], 99.50th=[48497], 99.90th=[50594], 99.95th=[88605], 00:36:32.277 | 99.99th=[88605] 00:36:32.277 bw ( KiB/s): min=34048, max=50688, per=39.02%, avg=43801.60, stdev=5543.97, samples=10 00:36:32.277 iops : min= 266, max= 396, avg=342.20, stdev=43.31, samples=10 00:36:32.277 lat (msec) : 4=0.06%, 10=88.91%, 20=8.52%, 50=2.28%, 100=0.23% 00:36:32.277 cpu : usr=93.91%, sys=5.83%, ctx=15, majf=0, minf=178 00:36:32.277 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.277 issued rwts: total=1713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.277 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.277 filename0: (groupid=0, jobs=1): err= 0: pid=3707942: Wed Nov 20 07:50:49 2024 00:36:32.277 read: IOPS=339, BW=42.5MiB/s (44.5MB/s)(214MiB/5045msec) 00:36:32.277 slat (usec): min=5, max=317, avg= 7.91, stdev= 7.69 00:36:32.277 clat (usec): min=3936, max=87538, avg=8795.11, stdev=6748.13 00:36:32.277 lat (usec): min=3945, max=87547, avg=8803.02, stdev=6748.08 00:36:32.277 clat percentiles (usec): 00:36:32.277 | 1.00th=[ 4621], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6456], 00:36:32.277 | 30.00th=[ 7111], 40.00th=[ 7635], 
50.00th=[ 7963], 60.00th=[ 8356], 00:36:32.277 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[10814], 00:36:32.277 | 99.00th=[47973], 99.50th=[50070], 99.90th=[87557], 99.95th=[87557], 00:36:32.277 | 99.99th=[87557] 00:36:32.277 bw ( KiB/s): min=37120, max=50432, per=39.04%, avg=43827.20, stdev=4038.35, samples=10 00:36:32.277 iops : min= 290, max= 394, avg=342.40, stdev=31.55, samples=10 00:36:32.277 lat (msec) : 4=0.23%, 10=84.36%, 20=13.71%, 50=1.17%, 100=0.53% 00:36:32.277 cpu : usr=93.60%, sys=6.13%, ctx=7, majf=0, minf=84 00:36:32.277 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.277 issued rwts: total=1714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.277 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.277 00:36:32.277 Run status group 0 (all jobs): 00:36:32.277 READ: bw=110MiB/s (115MB/s), 24.7MiB/s-42.5MiB/s (25.9MB/s-44.5MB/s), io=553MiB (580MB), run=5045-5047msec 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:32.277 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 2 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.278 bdev_null0 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.278 [2024-11-20 07:50:49.615527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.278 bdev_null1 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.278 
07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.278 bdev_null2 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:32.278 { 00:36:32.278 "params": { 00:36:32.278 "name": "Nvme$subsystem", 00:36:32.278 "trtype": "$TEST_TRANSPORT", 00:36:32.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.278 "adrfam": "ipv4", 00:36:32.278 "trsvcid": "$NVMF_PORT", 00:36:32.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.278 "hdgst": ${hdgst:-false}, 00:36:32.278 "ddgst": ${ddgst:-false} 00:36:32.278 }, 00:36:32.278 "method": "bdev_nvme_attach_controller" 00:36:32.278 } 00:36:32.278 EOF 00:36:32.278 )") 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:32.278 { 00:36:32.278 "params": { 00:36:32.278 "name": "Nvme$subsystem", 00:36:32.278 "trtype": "$TEST_TRANSPORT", 00:36:32.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.278 "adrfam": "ipv4", 00:36:32.278 "trsvcid": "$NVMF_PORT", 00:36:32.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.278 "hdgst": ${hdgst:-false}, 00:36:32.278 "ddgst": ${ddgst:-false} 00:36:32.278 }, 00:36:32.278 "method": "bdev_nvme_attach_controller" 00:36:32.278 } 00:36:32.278 EOF 00:36:32.278 )") 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.278 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:32.278 07:50:49 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:32.279 { 00:36:32.279 "params": { 00:36:32.279 "name": "Nvme$subsystem", 00:36:32.279 "trtype": "$TEST_TRANSPORT", 00:36:32.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.279 "adrfam": "ipv4", 00:36:32.279 "trsvcid": "$NVMF_PORT", 00:36:32.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.279 "hdgst": ${hdgst:-false}, 00:36:32.279 "ddgst": ${ddgst:-false} 00:36:32.279 }, 00:36:32.279 "method": "bdev_nvme_attach_controller" 00:36:32.279 } 00:36:32.279 EOF 00:36:32.279 )") 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:32.279 "params": { 00:36:32.279 "name": "Nvme0", 00:36:32.279 "trtype": "tcp", 00:36:32.279 "traddr": "10.0.0.2", 00:36:32.279 "adrfam": "ipv4", 00:36:32.279 "trsvcid": "4420", 00:36:32.279 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:32.279 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:32.279 "hdgst": false, 00:36:32.279 "ddgst": false 00:36:32.279 }, 00:36:32.279 "method": "bdev_nvme_attach_controller" 00:36:32.279 },{ 00:36:32.279 "params": { 00:36:32.279 "name": "Nvme1", 00:36:32.279 "trtype": "tcp", 00:36:32.279 "traddr": "10.0.0.2", 00:36:32.279 "adrfam": "ipv4", 00:36:32.279 "trsvcid": "4420", 00:36:32.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:32.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:32.279 "hdgst": false, 00:36:32.279 "ddgst": false 00:36:32.279 }, 00:36:32.279 "method": "bdev_nvme_attach_controller" 00:36:32.279 },{ 00:36:32.279 "params": { 00:36:32.279 "name": "Nvme2", 00:36:32.279 "trtype": "tcp", 00:36:32.279 "traddr": "10.0.0.2", 00:36:32.279 "adrfam": "ipv4", 00:36:32.279 "trsvcid": "4420", 00:36:32.279 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:32.279 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:32.279 "hdgst": false, 00:36:32.279 "ddgst": false 00:36:32.279 }, 00:36:32.279 "method": "bdev_nvme_attach_controller" 00:36:32.279 }' 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n 
'' ]] 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:32.279 07:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.279 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:32.279 ... 00:36:32.279 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:32.279 ... 00:36:32.279 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:32.279 ... 00:36:32.279 fio-3.35 00:36:32.279 Starting 24 threads 00:36:44.504 00:36:44.504 filename0: (groupid=0, jobs=1): err= 0: pid=3709358: Wed Nov 20 07:51:01 2024 00:36:44.504 read: IOPS=674, BW=2697KiB/s (2762kB/s)(26.4MiB/10013msec) 00:36:44.504 slat (nsec): min=5685, max=83839, avg=12037.15, stdev=8793.02 00:36:44.504 clat (usec): min=6522, max=25343, avg=23625.66, stdev=1979.22 00:36:44.504 lat (usec): min=6540, max=25352, avg=23637.69, stdev=1978.58 00:36:44.504 clat percentiles (usec): 00:36:44.504 | 1.00th=[11994], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:44.504 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:44.504 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.504 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:36:44.504 | 99.99th=[25297] 00:36:44.504 bw ( KiB/s): min= 2560, max= 3072, per=4.17%, avg=2694.11, stdev=108.62, samples=19 00:36:44.504 iops : min= 640, max= 768, avg=673.47, stdev=27.16, samples=19 00:36:44.504 lat (msec) : 10=0.71%, 20=1.90%, 50=97.39% 00:36:44.504 cpu : usr=98.43%, sys=1.06%, ctx=199, majf=0, minf=71 00:36:44.504 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.504 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.504 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.504 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.504 filename0: (groupid=0, jobs=1): err= 0: pid=3709359: Wed Nov 20 07:51:01 2024 00:36:44.504 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10009msec) 00:36:44.504 slat (nsec): min=5602, max=91231, avg=16640.24, stdev=13615.77 00:36:44.504 clat (usec): min=13124, max=29164, avg=23816.77, stdev=981.86 00:36:44.504 lat (usec): min=13138, max=29172, avg=23833.41, stdev=981.68 00:36:44.504 clat percentiles (usec): 00:36:44.504 | 1.00th=[20055], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:44.504 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.504 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.504 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25297], 99.95th=[28967], 00:36:44.504 | 99.99th=[29230] 00:36:44.504 bw ( KiB/s): min= 2560, max= 2810, per=4.13%, avg=2667.16, stdev=63.33, samples=19 00:36:44.504 iops : min= 640, max= 702, avg=666.74, stdev=15.77, samples=19 00:36:44.504 lat (msec) : 20=0.85%, 50=99.15% 00:36:44.504 cpu : usr=98.72%, sys=0.94%, ctx=37, majf=0, minf=71 00:36:44.504 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.504 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.504 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.504 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.504 filename0: (groupid=0, jobs=1): err= 0: pid=3709360: Wed Nov 20 07:51:01 2024 00:36:44.504 read: IOPS=694, BW=2779KiB/s (2846kB/s)(27.2MiB/10005msec) 00:36:44.504 slat (nsec): min=5665, max=89007, avg=17083.42, stdev=12549.48 00:36:44.504 clat (usec): min=7064, max=42201, avg=22908.97, stdev=4288.16 00:36:44.504 lat (usec): min=7079, max=42223, avg=22926.06, stdev=4290.40 00:36:44.504 clat percentiles (usec): 00:36:44.504 | 1.00th=[10683], 5.00th=[15139], 10.00th=[17171], 20.00th=[20317], 00:36:44.504 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:44.504 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25560], 95.00th=[30016], 00:36:44.504 | 99.00th=[36963], 99.50th=[38011], 99.90th=[41681], 99.95th=[42206], 00:36:44.504 | 99.99th=[42206] 00:36:44.504 bw ( KiB/s): min= 2554, max= 3257, per=4.31%, avg=2782.58, stdev=189.81, samples=19 00:36:44.504 iops : min= 638, max= 814, avg=695.58, stdev=47.47, samples=19 00:36:44.504 lat (msec) : 10=0.76%, 20=18.46%, 50=80.78% 00:36:44.504 cpu : usr=98.57%, sys=0.92%, ctx=164, majf=0, minf=64 00:36:44.504 IO depths : 1=2.4%, 2=5.4%, 4=14.5%, 8=66.9%, 16=10.9%, 32=0.0%, >=64=0.0% 00:36:44.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.504 complete : 0=0.0%, 4=91.5%, 8=3.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.504 issued rwts: total=6951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.504 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.504 filename0: (groupid=0, jobs=1): err= 0: pid=3709361: Wed Nov 20 07:51:01 2024 00:36:44.504 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.0MiB/10003msec) 00:36:44.504 slat (nsec): min=5667, max=78008, avg=14985.49, stdev=11274.43 00:36:44.504 clat (usec): min=2798, max=44902, avg=23865.78, stdev=1918.36 00:36:44.504 lat (usec): min=2805, max=44918, avg=23880.76, stdev=1918.39 00:36:44.504 clat percentiles (usec): 00:36:44.504 | 1.00th=[18744], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:44.504 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.504 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.504 | 99.00th=[26608], 99.50th=[34866], 99.90th=[44827], 99.95th=[44827], 00:36:44.504 | 99.99th=[44827] 00:36:44.504 bw ( KiB/s): min= 2528, max= 2688, per=4.12%, avg=2658.74, stdev=57.03, samples=19 00:36:44.504 iops : min= 632, max= 672, avg=664.63, stdev=14.24, samples=19 00:36:44.504 lat (msec) : 4=0.13%, 10=0.10%, 20=1.18%, 50=98.58% 00:36:44.504 cpu : usr=99.03%, sys=0.69%, ctx=12, majf=0, minf=43 00:36:44.504 IO depths : 1=5.9%, 2=12.0%, 4=24.4%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:44.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.504 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.504 issued rwts: total=6668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.504 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.504 filename0: (groupid=0, jobs=1): err= 0: pid=3709362: Wed Nov 20 07:51:01 2024 00:36:44.504 read: IOPS=671, BW=2684KiB/s (2749kB/s)(26.2MiB/10014msec) 00:36:44.504 slat (nsec): min=5676, max=83731, avg=11618.25, stdev=8432.27 00:36:44.504 clat (usec): min=7389, max=26342, avg=23741.89, stdev=1697.32 
00:36:44.504 lat (usec): min=7410, max=26349, avg=23753.51, stdev=1694.90 00:36:44.504 clat percentiles (usec): 00:36:44.504 | 1.00th=[12911], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:44.504 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:44.504 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.504 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26346], 99.95th=[26346], 00:36:44.504 | 99.99th=[26346] 00:36:44.504 bw ( KiB/s): min= 2560, max= 3072, per=4.15%, avg=2682.60, stdev=103.95, samples=20 00:36:44.504 iops : min= 640, max= 768, avg=670.90, stdev=25.75, samples=20 00:36:44.504 lat (msec) : 10=0.71%, 20=0.71%, 50=98.57% 00:36:44.504 cpu : usr=98.57%, sys=0.96%, ctx=98, majf=0, minf=41 00:36:44.504 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.504 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.504 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.504 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.504 filename0: (groupid=0, jobs=1): err= 0: pid=3709363: Wed Nov 20 07:51:01 2024 00:36:44.504 read: IOPS=666, BW=2665KiB/s (2729kB/s)(26.0MiB/10001msec) 00:36:44.504 slat (nsec): min=5327, max=84700, avg=21338.18, stdev=13675.76 00:36:44.504 clat (usec): min=11480, max=38751, avg=23809.33, stdev=1367.47 00:36:44.504 lat (usec): min=11486, max=38765, avg=23830.67, stdev=1367.12 00:36:44.504 clat percentiles (usec): 00:36:44.504 | 1.00th=[19530], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:44.504 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.504 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.504 | 99.00th=[26084], 99.50th=[30016], 99.90th=[38536], 99.95th=[38536], 00:36:44.504 | 99.99th=[38536] 00:36:44.504 bw ( KiB/s): min= 2560, max= 2688, per=4.11%, avg=2657.37, stdev=52.80, samples=19 00:36:44.504 iops : min= 640, max= 672, avg=664.32, stdev=13.20, samples=19 00:36:44.504 lat (msec) : 20=1.05%, 50=98.95% 00:36:44.504 cpu : usr=98.69%, sys=0.84%, ctx=71, majf=0, minf=37 00:36:44.504 IO depths : 1=6.0%, 2=12.0%, 4=24.3%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:44.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 issued rwts: total=6664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.505 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.505 filename0: (groupid=0, jobs=1): err= 0: pid=3709364: Wed Nov 20 07:51:01 2024 00:36:44.505 read: IOPS=683, BW=2732KiB/s (2798kB/s)(26.7MiB/10016msec) 00:36:44.505 slat (nsec): min=5668, max=82637, avg=18452.54, stdev=13035.04 00:36:44.505 clat (usec): min=10556, max=40437, avg=23273.15, stdev=3350.16 00:36:44.505 lat (usec): min=10563, max=40468, avg=23291.60, stdev=3352.04 00:36:44.505 clat percentiles (usec): 00:36:44.505 | 1.00th=[14353], 5.00th=[16581], 10.00th=[18482], 20.00th=[22414], 00:36:44.505 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:44.505 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[28181], 00:36:44.505 | 99.00th=[35390], 99.50th=[37487], 99.90th=[40109], 99.95th=[40633], 00:36:44.505 | 99.99th=[40633] 00:36:44.505 bw ( KiB/s): min= 2554, max= 2928, per=4.23%, avg=2729.89, stdev=111.14, samples=19 00:36:44.505 iops : 
min= 638, max= 732, avg=682.42, stdev=27.82, samples=19 00:36:44.505 lat (msec) : 20=13.87%, 50=86.13% 00:36:44.505 cpu : usr=97.40%, sys=1.57%, ctx=920, majf=0, minf=47 00:36:44.505 IO depths : 1=3.2%, 2=6.4%, 4=14.7%, 8=65.1%, 16=10.6%, 32=0.0%, >=64=0.0% 00:36:44.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 complete : 0=0.0%, 4=91.5%, 8=4.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 issued rwts: total=6842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.505 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.505 filename0: (groupid=0, jobs=1): err= 0: pid=3709365: Wed Nov 20 07:51:01 2024 00:36:44.505 read: IOPS=668, BW=2675KiB/s (2739kB/s)(26.1MiB/10003msec) 00:36:44.505 slat (nsec): min=5667, max=75390, avg=19004.00, stdev=12276.44 00:36:44.505 clat (usec): min=3002, max=45329, avg=23752.57, stdev=2529.78 00:36:44.505 lat (usec): min=3008, max=45346, avg=23771.58, stdev=2530.79 00:36:44.505 clat percentiles (usec): 00:36:44.505 | 1.00th=[13829], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462], 00:36:44.505 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.505 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:44.505 | 99.00th=[34866], 99.50th=[37487], 99.90th=[45351], 99.95th=[45351], 00:36:44.505 | 99.99th=[45351] 00:36:44.505 bw ( KiB/s): min= 2436, max= 2864, per=4.12%, avg=2661.79, stdev=87.98, samples=19 00:36:44.505 iops : min= 609, max= 716, avg=665.42, stdev=21.99, samples=19 00:36:44.505 lat (msec) : 4=0.15%, 10=0.03%, 20=3.33%, 50=96.49% 00:36:44.505 cpu : usr=99.04%, sys=0.61%, ctx=84, majf=0, minf=44 00:36:44.505 IO depths : 1=5.2%, 2=11.0%, 4=23.4%, 8=53.0%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:44.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 issued rwts: total=6690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.505 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.505 filename1: (groupid=0, jobs=1): err= 0: pid=3709366: Wed Nov 20 07:51:01 2024 00:36:44.505 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10007msec) 00:36:44.505 slat (nsec): min=5674, max=74526, avg=17720.54, stdev=11246.71 00:36:44.505 clat (usec): min=9153, max=26189, avg=23775.95, stdev=984.19 00:36:44.505 lat (usec): min=9163, max=26196, avg=23793.67, stdev=984.44 00:36:44.505 clat percentiles (usec): 00:36:44.505 | 1.00th=[19530], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:44.505 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.505 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.505 | 99.00th=[25297], 99.50th=[25297], 99.90th=[26084], 99.95th=[26084], 00:36:44.505 | 99.99th=[26084] 00:36:44.505 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2673.89, stdev=58.61, samples=19 00:36:44.505 iops : min= 640, max= 704, avg=668.42, stdev=14.65, samples=19 00:36:44.505 lat (msec) : 10=0.01%, 20=0.99%, 50=99.00% 00:36:44.505 cpu : usr=98.71%, sys=0.88%, ctx=98, majf=0, minf=54 00:36:44.505 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.505 latency : target=0, window=0, percentile=100.00%, depth=16 
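[Editor's note: each filename*/groupid block in this listing, including the ones that follow below, repeats the same shape: slat/clat/lat breakdowns, clat percentiles, per-sample bandwidth and IOPS, CPU usage, the IO depths/submit/complete histograms, and the issued rwts totals. With twenty-odd jobs it is easier to compare them after aggregating; a minimal sketch, assuming the console output was saved to fio.log (a hypothetical path, not part of the test):]

```bash
#!/usr/bin/env bash
# Aggregate the per-job "iops : ... avg=..." lines from a saved copy of the
# fio output above. The log file name is an assumption, not from the test.
grep -E 'iops[[:space:]]*:' fio.log |
  sed -E 's/.*avg=([0-9.]+).*/\1/' |
  awk '{ sum += $1; n++ } END { if (n) printf "jobs=%d mean_avg_iops=%.1f\n", n, sum / n }'
```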
00:36:44.505 filename1: (groupid=0, jobs=1): err= 0: pid=3709367: Wed Nov 20 07:51:01 2024 00:36:44.505 read: IOPS=672, BW=2691KiB/s (2755kB/s)(26.3MiB/10013msec) 00:36:44.505 slat (nsec): min=5672, max=99258, avg=9859.91, stdev=7367.80 00:36:44.505 clat (usec): min=7516, max=26265, avg=23698.85, stdev=1817.52 00:36:44.505 lat (usec): min=7545, max=26272, avg=23708.71, stdev=1815.27 00:36:44.505 clat percentiles (usec): 00:36:44.505 | 1.00th=[12911], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:44.505 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:44.505 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.505 | 99.00th=[25297], 99.50th=[25297], 99.90th=[26346], 99.95th=[26346], 00:36:44.505 | 99.99th=[26346] 00:36:44.505 bw ( KiB/s): min= 2560, max= 3072, per=4.17%, avg=2694.11, stdev=99.89, samples=19 00:36:44.505 iops : min= 640, max= 768, avg=673.47, stdev=24.98, samples=19 00:36:44.505 lat (msec) : 10=0.71%, 20=1.43%, 50=97.86% 00:36:44.505 cpu : usr=99.06%, sys=0.66%, ctx=10, majf=0, minf=34 00:36:44.505 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:44.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.505 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.505 filename1: (groupid=0, jobs=1): err= 0: pid=3709368: Wed Nov 20 07:51:01 2024 00:36:44.505 read: IOPS=671, BW=2687KiB/s (2752kB/s)(26.2MiB/10003msec) 00:36:44.505 slat (nsec): min=5673, max=90391, avg=14437.09, stdev=11249.01 00:36:44.505 clat (usec): min=9451, max=40732, avg=23744.45, stdev=3046.88 00:36:44.505 lat (usec): min=9460, max=40741, avg=23758.89, stdev=3047.64 00:36:44.505 clat percentiles (usec): 00:36:44.505 | 1.00th=[14353], 5.00th=[18220], 10.00th=[20317], 20.00th=[23462], 00:36:44.505 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.505 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25297], 95.00th=[28967], 00:36:44.505 | 99.00th=[33424], 99.50th=[35914], 99.90th=[40109], 99.95th=[40633], 00:36:44.505 | 99.99th=[40633] 00:36:44.505 bw ( KiB/s): min= 2560, max= 2778, per=4.15%, avg=2677.26, stdev=56.09, samples=19 00:36:44.505 iops : min= 640, max= 694, avg=669.26, stdev=13.97, samples=19 00:36:44.505 lat (msec) : 10=0.12%, 20=9.26%, 50=90.63% 00:36:44.505 cpu : usr=98.99%, sys=0.72%, ctx=12, majf=0, minf=44 00:36:44.505 IO depths : 1=0.6%, 2=1.2%, 4=5.0%, 8=77.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:36:44.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 complete : 0=0.0%, 4=89.8%, 8=8.2%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.505 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.505 filename1: (groupid=0, jobs=1): err= 0: pid=3709369: Wed Nov 20 07:51:01 2024 00:36:44.505 read: IOPS=664, BW=2660KiB/s (2724kB/s)(26.0MiB/10010msec) 00:36:44.505 slat (nsec): min=5453, max=90243, avg=21108.09, stdev=15060.68 00:36:44.505 clat (usec): min=11380, max=43511, avg=23883.81, stdev=1324.83 00:36:44.505 lat (usec): min=11393, max=43526, avg=23904.92, stdev=1324.04 00:36:44.505 clat percentiles (usec): 00:36:44.505 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:44.505 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 
60.00th=[23987], 00:36:44.505 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.505 | 99.00th=[25560], 99.50th=[28181], 99.90th=[43254], 99.95th=[43254], 00:36:44.505 | 99.99th=[43254] 00:36:44.505 bw ( KiB/s): min= 2432, max= 2688, per=4.11%, avg=2652.74, stdev=72.39, samples=19 00:36:44.505 iops : min= 608, max= 672, avg=663.05, stdev=18.14, samples=19 00:36:44.505 lat (msec) : 20=0.48%, 50=99.52% 00:36:44.505 cpu : usr=99.08%, sys=0.63%, ctx=9, majf=0, minf=44 00:36:44.505 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:44.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.505 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.505 filename1: (groupid=0, jobs=1): err= 0: pid=3709370: Wed Nov 20 07:51:01 2024 00:36:44.505 read: IOPS=691, BW=2765KiB/s (2831kB/s)(27.0MiB/10009msec) 00:36:44.505 slat (nsec): min=5549, max=60308, avg=7871.50, stdev=3623.57 00:36:44.505 clat (usec): min=1267, max=25632, avg=23074.46, stdev=3854.11 00:36:44.505 lat (usec): min=1283, max=25642, avg=23082.33, stdev=3852.42 00:36:44.505 clat percentiles (usec): 00:36:44.505 | 1.00th=[ 1598], 5.00th=[17171], 10.00th=[23200], 20.00th=[23462], 00:36:44.505 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:44.505 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.505 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:36:44.505 | 99.99th=[25560] 00:36:44.505 bw ( KiB/s): min= 2560, max= 4280, per=4.27%, avg=2758.32, stdev=377.98, samples=19 00:36:44.505 iops : min= 640, max= 1070, avg=689.58, stdev=94.49, samples=19 00:36:44.505 lat (msec) : 2=1.59%, 4=0.49%, 10=0.90%, 20=3.14%, 50=93.89% 00:36:44.505 cpu : usr=98.98%, sys=0.72%, ctx=14, majf=0, minf=141 00:36:44.505 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:44.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.505 issued rwts: total=6919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.506 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.506 filename1: (groupid=0, jobs=1): err= 0: pid=3709371: Wed Nov 20 07:51:01 2024 00:36:44.506 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10010msec) 00:36:44.506 slat (nsec): min=5682, max=85920, avg=15920.06, stdev=14320.89 00:36:44.506 clat (usec): min=14313, max=31534, avg=23877.68, stdev=1066.95 00:36:44.506 lat (usec): min=14321, max=31540, avg=23893.60, stdev=1065.79 00:36:44.506 clat percentiles (usec): 00:36:44.506 | 1.00th=[19006], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:44.506 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.506 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.506 | 99.00th=[26346], 99.50th=[30016], 99.90th=[31327], 99.95th=[31589], 00:36:44.506 | 99.99th=[31589] 00:36:44.506 bw ( KiB/s): min= 2560, max= 2693, per=4.12%, avg=2660.68, stdev=53.47, samples=19 00:36:44.506 iops : min= 640, max= 673, avg=665.11, stdev=13.34, samples=19 00:36:44.506 lat (msec) : 20=1.00%, 50=99.00% 00:36:44.506 cpu : usr=98.55%, sys=0.95%, ctx=64, majf=0, minf=38 00:36:44.506 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 
32=0.0%, >=64=0.0% 00:36:44.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.506 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.506 filename1: (groupid=0, jobs=1): err= 0: pid=3709372: Wed Nov 20 07:51:01 2024 00:36:44.506 read: IOPS=672, BW=2691KiB/s (2755kB/s)(26.3MiB/10013msec) 00:36:44.506 slat (usec): min=5, max=109, avg=16.66, stdev=12.81 00:36:44.506 clat (usec): min=7280, max=25490, avg=23645.70, stdev=1810.32 00:36:44.506 lat (usec): min=7298, max=25517, avg=23662.36, stdev=1808.97 00:36:44.506 clat percentiles (usec): 00:36:44.506 | 1.00th=[12518], 5.00th=[22938], 10.00th=[23462], 20.00th=[23462], 00:36:44.506 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.506 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.506 | 99.00th=[25035], 99.50th=[25297], 99.90th=[25297], 99.95th=[25297], 00:36:44.506 | 99.99th=[25560] 00:36:44.506 bw ( KiB/s): min= 2560, max= 3072, per=4.17%, avg=2694.11, stdev=108.22, samples=19 00:36:44.506 iops : min= 640, max= 768, avg=673.47, stdev=27.03, samples=19 00:36:44.506 lat (msec) : 10=0.71%, 20=1.43%, 50=97.86% 00:36:44.506 cpu : usr=98.92%, sys=0.68%, ctx=68, majf=0, minf=57 00:36:44.506 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.506 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.506 filename1: (groupid=0, jobs=1): err= 0: pid=3709373: Wed Nov 20 07:51:01 2024 00:36:44.506 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.0MiB/10002msec) 00:36:44.506 slat (nsec): min=5723, max=87274, avg=21256.53, stdev=12262.00 00:36:44.506 clat (usec): min=12647, max=38486, avg=23851.93, stdev=1192.75 00:36:44.506 lat (usec): min=12670, max=38502, avg=23873.19, stdev=1192.03 00:36:44.506 clat percentiles (usec): 00:36:44.506 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:44.506 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.506 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.506 | 99.00th=[26084], 99.50th=[30278], 99.90th=[38536], 99.95th=[38536], 00:36:44.506 | 99.99th=[38536] 00:36:44.506 bw ( KiB/s): min= 2560, max= 2688, per=4.11%, avg=2654.00, stdev=57.73, samples=19 00:36:44.506 iops : min= 640, max= 672, avg=663.47, stdev=14.42, samples=19 00:36:44.506 lat (msec) : 20=0.68%, 50=99.32% 00:36:44.506 cpu : usr=99.12%, sys=0.58%, ctx=11, majf=0, minf=40 00:36:44.506 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.506 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.506 filename2: (groupid=0, jobs=1): err= 0: pid=3709374: Wed Nov 20 07:51:01 2024 00:36:44.506 read: IOPS=672, BW=2689KiB/s (2753kB/s)(26.3MiB/10021msec) 00:36:44.506 slat (nsec): min=5673, max=85553, avg=22993.00, stdev=13940.28 00:36:44.506 clat (usec): 
min=7610, max=33953, avg=23595.40, stdev=1836.72 00:36:44.506 lat (usec): min=7643, max=33960, avg=23618.39, stdev=1836.85 00:36:44.506 clat percentiles (usec): 00:36:44.506 | 1.00th=[12911], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:44.506 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.506 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:36:44.506 | 99.00th=[25822], 99.50th=[26084], 99.90th=[33817], 99.95th=[33817], 00:36:44.506 | 99.99th=[33817] 00:36:44.506 bw ( KiB/s): min= 2560, max= 3072, per=4.16%, avg=2687.40, stdev=101.82, samples=20 00:36:44.506 iops : min= 640, max= 768, avg=671.80, stdev=25.46, samples=20 00:36:44.506 lat (msec) : 10=0.68%, 20=1.66%, 50=97.65% 00:36:44.506 cpu : usr=98.75%, sys=0.84%, ctx=91, majf=0, minf=67 00:36:44.506 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:44.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.506 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.506 filename2: (groupid=0, jobs=1): err= 0: pid=3709375: Wed Nov 20 07:51:01 2024 00:36:44.506 read: IOPS=682, BW=2730KiB/s (2795kB/s)(26.7MiB/10014msec) 00:36:44.506 slat (nsec): min=5671, max=90948, avg=15408.71, stdev=11578.23 00:36:44.506 clat (usec): min=8285, max=41988, avg=23340.37, stdev=4079.76 00:36:44.506 lat (usec): min=8318, max=42003, avg=23355.78, stdev=4081.35 00:36:44.506 clat percentiles (usec): 00:36:44.506 | 1.00th=[12911], 5.00th=[15401], 10.00th=[17957], 20.00th=[21627], 00:36:44.506 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.506 | 70.00th=[24249], 80.00th=[24511], 90.00th=[26346], 95.00th=[30016], 00:36:44.506 | 99.00th=[38011], 99.50th=[38536], 99.90th=[41157], 99.95th=[41681], 00:36:44.506 | 99.99th=[42206] 00:36:44.506 bw ( KiB/s): min= 2560, max= 2960, per=4.23%, avg=2731.16, stdev=124.94, samples=19 00:36:44.506 iops : min= 640, max= 740, avg=682.74, stdev=31.25, samples=19 00:36:44.506 lat (msec) : 10=0.23%, 20=15.54%, 50=84.23% 00:36:44.506 cpu : usr=99.03%, sys=0.66%, ctx=13, majf=0, minf=43 00:36:44.506 IO depths : 1=1.8%, 2=3.7%, 4=10.7%, 8=71.3%, 16=12.5%, 32=0.0%, >=64=0.0% 00:36:44.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 complete : 0=0.0%, 4=90.6%, 8=5.4%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 issued rwts: total=6834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.506 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.506 filename2: (groupid=0, jobs=1): err= 0: pid=3709376: Wed Nov 20 07:51:01 2024 00:36:44.506 read: IOPS=676, BW=2705KiB/s (2770kB/s)(26.4MiB/10002msec) 00:36:44.506 slat (nsec): min=5441, max=86519, avg=20531.01, stdev=14184.91 00:36:44.506 clat (usec): min=7746, max=39951, avg=23487.23, stdev=2467.93 00:36:44.506 lat (usec): min=7753, max=39967, avg=23507.76, stdev=2469.75 00:36:44.506 clat percentiles (usec): 00:36:44.506 | 1.00th=[14615], 5.00th=[18220], 10.00th=[22938], 20.00th=[23462], 00:36:44.506 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.506 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.506 | 99.00th=[31327], 99.50th=[33424], 99.90th=[39060], 99.95th=[39060], 00:36:44.506 | 99.99th=[40109] 00:36:44.506 bw ( KiB/s): min= 2560, max= 2992, per=4.18%, 
avg=2699.47, stdev=101.64, samples=19 00:36:44.506 iops : min= 640, max= 748, avg=674.84, stdev=25.41, samples=19 00:36:44.506 lat (msec) : 10=0.19%, 20=6.09%, 50=93.72% 00:36:44.506 cpu : usr=99.05%, sys=0.63%, ctx=43, majf=0, minf=64 00:36:44.506 IO depths : 1=3.6%, 2=8.3%, 4=19.7%, 8=58.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:36:44.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 complete : 0=0.0%, 4=93.0%, 8=2.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 issued rwts: total=6764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.506 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.506 filename2: (groupid=0, jobs=1): err= 0: pid=3709377: Wed Nov 20 07:51:01 2024 00:36:44.506 read: IOPS=672, BW=2691KiB/s (2755kB/s)(26.3MiB/10004msec) 00:36:44.506 slat (nsec): min=5666, max=79753, avg=16607.11, stdev=12982.03 00:36:44.506 clat (usec): min=5031, max=68292, avg=23690.56, stdev=3899.33 00:36:44.506 lat (usec): min=5037, max=68309, avg=23707.17, stdev=3899.85 00:36:44.506 clat percentiles (usec): 00:36:44.506 | 1.00th=[12780], 5.00th=[17171], 10.00th=[19530], 20.00th=[23200], 00:36:44.506 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.506 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25560], 95.00th=[29754], 00:36:44.506 | 99.00th=[35914], 99.50th=[38011], 99.90th=[56886], 99.95th=[56886], 00:36:44.506 | 99.99th=[68682] 00:36:44.506 bw ( KiB/s): min= 2464, max= 2896, per=4.16%, avg=2684.74, stdev=93.11, samples=19 00:36:44.506 iops : min= 616, max= 724, avg=671.16, stdev=23.25, samples=19 00:36:44.506 lat (msec) : 10=0.15%, 20=10.63%, 50=88.99%, 100=0.24% 00:36:44.506 cpu : usr=99.01%, sys=0.69%, ctx=13, majf=0, minf=48 00:36:44.506 IO depths : 1=0.8%, 2=2.1%, 4=7.8%, 8=74.6%, 16=14.8%, 32=0.0%, >=64=0.0% 00:36:44.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 complete : 0=0.0%, 4=90.5%, 8=6.7%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.506 issued rwts: total=6729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.506 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.506 filename2: (groupid=0, jobs=1): err= 0: pid=3709378: Wed Nov 20 07:51:01 2024 00:36:44.506 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.0MiB/10003msec) 00:36:44.506 slat (nsec): min=5606, max=97595, avg=15814.23, stdev=10928.39 00:36:44.506 clat (usec): min=3096, max=45049, avg=23860.16, stdev=1554.60 00:36:44.506 lat (usec): min=3102, max=45067, avg=23875.97, stdev=1554.87 00:36:44.506 clat percentiles (usec): 00:36:44.506 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:44.506 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.506 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.507 | 99.00th=[25035], 99.50th=[25297], 99.90th=[44827], 99.95th=[44827], 00:36:44.507 | 99.99th=[44827] 00:36:44.507 bw ( KiB/s): min= 2560, max= 2688, per=4.11%, avg=2653.68, stdev=57.55, samples=19 00:36:44.507 iops : min= 640, max= 672, avg=663.37, stdev=14.36, samples=19 00:36:44.507 lat (msec) : 4=0.06%, 10=0.18%, 20=0.30%, 50=99.46% 00:36:44.507 cpu : usr=98.81%, sys=0.90%, ctx=12, majf=0, minf=49 00:36:44.507 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.507 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.507 issued rwts: total=6666,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:36:44.507 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.507 filename2: (groupid=0, jobs=1): err= 0: pid=3709379: Wed Nov 20 07:51:01 2024 00:36:44.507 read: IOPS=669, BW=2677KiB/s (2741kB/s)(26.1MiB/10003msec) 00:36:44.507 slat (nsec): min=5352, max=82979, avg=17541.89, stdev=13718.27 00:36:44.507 clat (usec): min=9100, max=34968, avg=23757.48, stdev=1660.36 00:36:44.507 lat (usec): min=9107, max=34982, avg=23775.03, stdev=1660.70 00:36:44.507 clat percentiles (usec): 00:36:44.507 | 1.00th=[15139], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:44.507 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.507 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:44.507 | 99.00th=[27657], 99.50th=[32375], 99.90th=[34866], 99.95th=[34866], 00:36:44.507 | 99.99th=[34866] 00:36:44.507 bw ( KiB/s): min= 2560, max= 2736, per=4.13%, avg=2665.47, stdev=52.03, samples=19 00:36:44.507 iops : min= 640, max= 684, avg=666.32, stdev=12.97, samples=19 00:36:44.507 lat (msec) : 10=0.06%, 20=2.00%, 50=97.94% 00:36:44.507 cpu : usr=98.90%, sys=0.76%, ctx=82, majf=0, minf=56 00:36:44.507 IO depths : 1=3.4%, 2=7.6%, 4=16.8%, 8=61.2%, 16=11.0%, 32=0.0%, >=64=0.0% 00:36:44.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.507 complete : 0=0.0%, 4=92.5%, 8=3.6%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.507 issued rwts: total=6694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.507 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.507 filename2: (groupid=0, jobs=1): err= 0: pid=3709380: Wed Nov 20 07:51:01 2024 00:36:44.507 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10009msec) 00:36:44.507 slat (nsec): min=5694, max=84029, avg=21506.15, stdev=14412.19 00:36:44.507 clat (usec): min=13217, max=29063, avg=23756.04, stdev=964.33 00:36:44.507 lat (usec): min=13226, max=29069, avg=23777.54, stdev=964.74 00:36:44.507 clat percentiles (usec): 00:36:44.507 | 1.00th=[20055], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:44.507 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:44.507 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:36:44.507 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:36:44.507 | 99.99th=[28967] 00:36:44.507 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2667.16, stdev=47.71, samples=19 00:36:44.507 iops : min= 640, max= 672, avg=666.74, stdev=11.91, samples=19 00:36:44.507 lat (msec) : 20=0.90%, 50=99.10% 00:36:44.507 cpu : usr=98.75%, sys=0.74%, ctx=176, majf=0, minf=53 00:36:44.507 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.507 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.507 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.507 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.507 filename2: (groupid=0, jobs=1): err= 0: pid=3709381: Wed Nov 20 07:51:01 2024 00:36:44.507 read: IOPS=690, BW=2761KiB/s (2828kB/s)(27.0MiB/10004msec) 00:36:44.507 slat (nsec): min=5593, max=88589, avg=15631.34, stdev=12894.02 00:36:44.507 clat (usec): min=5823, max=38365, avg=23065.44, stdev=3836.29 00:36:44.507 lat (usec): min=5829, max=38380, avg=23081.07, stdev=3837.71 00:36:44.507 clat percentiles (usec): 00:36:44.507 | 1.00th=[13829], 5.00th=[15664], 10.00th=[17695], 
20.00th=[20579], 00:36:44.507 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:44.507 | 70.00th=[23987], 80.00th=[24249], 90.00th=[26346], 95.00th=[29492], 00:36:44.507 | 99.00th=[34341], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536], 00:36:44.507 | 99.99th=[38536] 00:36:44.507 bw ( KiB/s): min= 2560, max= 2976, per=4.27%, avg=2759.26, stdev=135.44, samples=19 00:36:44.507 iops : min= 640, max= 744, avg=689.79, stdev=33.83, samples=19 00:36:44.507 lat (msec) : 10=0.32%, 20=18.53%, 50=81.15% 00:36:44.507 cpu : usr=98.95%, sys=0.74%, ctx=53, majf=0, minf=59 00:36:44.507 IO depths : 1=2.1%, 2=4.5%, 4=11.7%, 8=69.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:36:44.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.507 complete : 0=0.0%, 4=90.8%, 8=5.2%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.507 issued rwts: total=6906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.507 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.507 00:36:44.507 Run status group 0 (all jobs): 00:36:44.507 READ: bw=63.1MiB/s (66.1MB/s), 2660KiB/s-2779KiB/s (2724kB/s-2846kB/s), io=632MiB (663MB), run=10001-10021msec 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.507 07:51:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.507 bdev_null0 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:44.507 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.508 [2024-11-20 07:51:01.644978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.508 bdev_null1 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:44.508 07:51:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:44.508 { 00:36:44.508 "params": { 00:36:44.508 "name": "Nvme$subsystem", 00:36:44.508 "trtype": "$TEST_TRANSPORT", 00:36:44.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:44.508 "adrfam": "ipv4", 00:36:44.508 "trsvcid": "$NVMF_PORT", 00:36:44.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:44.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:44.508 "hdgst": ${hdgst:-false}, 00:36:44.508 "ddgst": ${ddgst:-false} 00:36:44.508 }, 00:36:44.508 "method": "bdev_nvme_attach_controller" 00:36:44.508 } 00:36:44.508 EOF 00:36:44.508 )") 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:44.508 { 00:36:44.508 "params": { 00:36:44.508 "name": "Nvme$subsystem", 00:36:44.508 "trtype": "$TEST_TRANSPORT", 00:36:44.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:44.508 "adrfam": "ipv4", 00:36:44.508 "trsvcid": "$NVMF_PORT", 00:36:44.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:44.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:44.508 "hdgst": ${hdgst:-false}, 00:36:44.508 "ddgst": ${ddgst:-false} 00:36:44.508 }, 00:36:44.508 "method": "bdev_nvme_attach_controller" 00:36:44.508 } 00:36:44.508 EOF 00:36:44.508 )") 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
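[Editor's note: the xtrace above shows gen_nvmf_target_json at work. For each subsystem ID it appends one bdev_nvme_attach_controller fragment (the here-doc between the EOF markers) to a config array; the fragments are then comma-joined and piped through jq, producing the JSON printed a little further below for fio's spdk_bdev ioengine. A condensed sketch of that accumulation pattern, with the loop bounds hard-coded for illustration rather than taken from the helper:]

```bash
#!/usr/bin/env bash
# Condensed sketch of the pattern traced above: one attach-controller JSON
# fragment per subsystem, comma-joined so jq can normalize the result.
config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # comma-joined fragments, ready for jq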
00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:44.508 "params": { 00:36:44.508 "name": "Nvme0", 00:36:44.508 "trtype": "tcp", 00:36:44.508 "traddr": "10.0.0.2", 00:36:44.508 "adrfam": "ipv4", 00:36:44.508 "trsvcid": "4420", 00:36:44.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:44.508 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:44.508 "hdgst": false, 00:36:44.508 "ddgst": false 00:36:44.508 }, 00:36:44.508 "method": "bdev_nvme_attach_controller" 00:36:44.508 },{ 00:36:44.508 "params": { 00:36:44.508 "name": "Nvme1", 00:36:44.508 "trtype": "tcp", 00:36:44.508 "traddr": "10.0.0.2", 00:36:44.508 "adrfam": "ipv4", 00:36:44.508 "trsvcid": "4420", 00:36:44.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:44.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:44.508 "hdgst": false, 00:36:44.508 "ddgst": false 00:36:44.508 }, 00:36:44.508 "method": "bdev_nvme_attach_controller" 00:36:44.508 }' 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:44.508 07:51:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.508 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:44.508 ... 00:36:44.508 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:44.508 ... 
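[Editor's note: at this point fio runs entirely against NVMe-oF bdevs. LD_PRELOAD injects the SPDK fio plugin, --spdk_json_conf supplies the controller config built above, and the job file (fed through /dev/fd/61) sets rw=randread with the bs=8k,16k,128k triple, which fio interprets as read/write/trim block sizes, hence the (R) 8192B, (W) 16.0KiB, (T) 128KiB banner. A roughly equivalent standalone invocation, with plugin path, config path, and bdev names assumed for illustration rather than taken from the test:]

```bash
# Roughly equivalent standalone run; paths and bdev names are assumptions
# (the test itself pipes both the JSON config and job file via /dev/fd).
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf bdev.json \
  --thread=1 --rw=randread --bs=8k,16k,128k --iodepth=8 \
  --numjobs=2 --runtime=5 \
  --name=filename0 --filename=Nvme0n1 \
  --name=filename1 --filename=Nvme1n1
```

With numjobs=2 applied globally across the two job sections, this yields the "Starting 4 threads" line that follows.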
00:36:44.508 fio-3.35 00:36:44.508 Starting 4 threads 00:36:51.100 00:36:51.100 filename0: (groupid=0, jobs=1): err= 0: pid=3711981: Wed Nov 20 07:51:08 2024 00:36:51.100 read: IOPS=2935, BW=22.9MiB/s (24.0MB/s)(115MiB/5001msec) 00:36:51.100 slat (nsec): min=5494, max=76595, avg=6386.85, stdev=2596.20 00:36:51.100 clat (usec): min=905, max=5110, avg=2707.57, stdev=332.93 00:36:51.100 lat (usec): min=910, max=5133, avg=2713.96, stdev=333.00 00:36:51.100 clat percentiles (usec): 00:36:51.100 | 1.00th=[ 2024], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2573], 00:36:51.100 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:51.100 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 3425], 00:36:51.100 | 99.00th=[ 4047], 99.50th=[ 4228], 99.90th=[ 4752], 99.95th=[ 5014], 00:36:51.100 | 99.99th=[ 5080] 00:36:51.100 bw ( KiB/s): min=23344, max=23792, per=24.81%, avg=23487.78, stdev=140.34, samples=9 00:36:51.100 iops : min= 2918, max= 2974, avg=2935.89, stdev=17.61, samples=9 00:36:51.100 lat (usec) : 1000=0.02% 00:36:51.100 lat (msec) : 2=0.80%, 4=97.72%, 10=1.46% 00:36:51.100 cpu : usr=96.16%, sys=3.46%, ctx=172, majf=0, minf=9 00:36:51.100 IO depths : 1=0.1%, 2=0.2%, 4=73.1%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.100 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.100 issued rwts: total=14681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.100 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:51.100 filename0: (groupid=0, jobs=1): err= 0: pid=3711982: Wed Nov 20 07:51:08 2024 00:36:51.100 read: IOPS=3010, BW=23.5MiB/s (24.7MB/s)(118MiB/5002msec) 00:36:51.100 slat (nsec): min=5493, max=74049, avg=6413.44, stdev=2475.47 00:36:51.100 clat (usec): min=948, max=4295, avg=2642.39, stdev=239.37 00:36:51.100 lat (usec): min=957, max=4301, avg=2648.80, stdev=239.43 00:36:51.100 clat percentiles (usec): 00:36:51.100 | 1.00th=[ 1942], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2540], 00:36:51.100 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:51.100 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2802], 95.00th=[ 2933], 00:36:51.100 | 99.00th=[ 3556], 99.50th=[ 3687], 99.90th=[ 4015], 99.95th=[ 4178], 00:36:51.100 | 99.99th=[ 4293] 00:36:51.100 bw ( KiB/s): min=23872, max=24336, per=25.43%, avg=24069.33, stdev=154.09, samples=9 00:36:51.100 iops : min= 2984, max= 3042, avg=3008.67, stdev=19.26, samples=9 00:36:51.100 lat (usec) : 1000=0.02% 00:36:51.100 lat (msec) : 2=1.51%, 4=98.35%, 10=0.12% 00:36:51.100 cpu : usr=96.16%, sys=3.60%, ctx=5, majf=0, minf=9 00:36:51.100 IO depths : 1=0.1%, 2=0.2%, 4=66.6%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.100 complete : 0=0.0%, 4=96.8%, 8=3.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.100 issued rwts: total=15057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.100 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:51.100 filename1: (groupid=0, jobs=1): err= 0: pid=3711983: Wed Nov 20 07:51:08 2024 00:36:51.100 read: IOPS=2970, BW=23.2MiB/s (24.3MB/s)(116MiB/5002msec) 00:36:51.100 slat (nsec): min=5509, max=79043, avg=6483.68, stdev=2545.83 00:36:51.100 clat (usec): min=1157, max=4976, avg=2676.07, stdev=243.55 00:36:51.100 lat (usec): min=1163, max=4996, avg=2682.56, stdev=243.67 00:36:51.100 clat percentiles (usec): 00:36:51.100 | 1.00th=[ 2147], 5.00th=[ 2376], 
10.00th=[ 2474], 20.00th=[ 2540], 00:36:51.100 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:51.100 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2868], 95.00th=[ 2966], 00:36:51.100 | 99.00th=[ 3851], 99.50th=[ 3982], 99.90th=[ 4359], 99.95th=[ 4621], 00:36:51.100 | 99.99th=[ 4948] 00:36:51.100 bw ( KiB/s): min=23248, max=23952, per=25.11%, avg=23767.11, stdev=211.98, samples=9 00:36:51.100 iops : min= 2906, max= 2994, avg=2970.89, stdev=26.50, samples=9 00:36:51.100 lat (msec) : 2=0.57%, 4=98.96%, 10=0.47% 00:36:51.100 cpu : usr=95.82%, sys=3.94%, ctx=6, majf=0, minf=9 00:36:51.100 IO depths : 1=0.1%, 2=0.2%, 4=71.4%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.100 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.100 issued rwts: total=14857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.100 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:51.100 filename1: (groupid=0, jobs=1): err= 0: pid=3711984: Wed Nov 20 07:51:08 2024 00:36:51.100 read: IOPS=2917, BW=22.8MiB/s (23.9MB/s)(114MiB/5002msec) 00:36:51.100 slat (nsec): min=5509, max=71287, avg=6199.43, stdev=2273.91 00:36:51.100 clat (usec): min=1378, max=5602, avg=2725.22, stdev=325.87 00:36:51.100 lat (usec): min=1384, max=5629, avg=2731.42, stdev=325.96 00:36:51.100 clat percentiles (usec): 00:36:51.100 | 1.00th=[ 2212], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2573], 00:36:51.100 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:36:51.100 | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 3392], 00:36:51.100 | 99.00th=[ 4080], 99.50th=[ 4293], 99.90th=[ 4686], 99.95th=[ 5276], 00:36:51.100 | 99.99th=[ 5342] 00:36:51.100 bw ( KiB/s): min=23024, max=23648, per=24.66%, avg=23342.00, stdev=186.47, samples=9 00:36:51.100 iops : min= 2878, max= 2956, avg=2917.67, stdev=23.37, samples=9 00:36:51.100 lat (msec) : 2=0.28%, 4=98.01%, 10=1.71% 00:36:51.100 cpu : usr=96.40%, sys=3.34%, ctx=7, majf=0, minf=11 00:36:51.100 IO depths : 1=0.1%, 2=0.2%, 4=72.7%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.100 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.100 issued rwts: total=14592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.100 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:51.100 00:36:51.100 Run status group 0 (all jobs): 00:36:51.100 READ: bw=92.4MiB/s (96.9MB/s), 22.8MiB/s-23.5MiB/s (23.9MB/s-24.7MB/s), io=462MiB (485MB), run=5001-5002msec 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:51.100 07:51:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:51.101 07:51:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:51.101 07:51:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:51.101 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.101 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.101 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.101 07:51:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:51.101 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.101 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.101 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.101 00:36:51.101 real 0m24.976s 00:36:51.101 user 5m17.377s 00:36:51.101 sys 0m4.598s 00:36:51.101 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:51.101 07:51:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.101 ************************************ 00:36:51.101 END TEST fio_dif_rand_params 00:36:51.101 ************************************ 00:36:51.101 07:51:08 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:51.101 07:51:08 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:51.101 07:51:08 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:51.101 07:51:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:51.101 ************************************ 00:36:51.101 START TEST fio_dif_digest 00:36:51.101 ************************************ 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:51.101 07:51:08 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.101 bdev_null0 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.101 [2024-11-20 07:51:08.359589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:51.101 { 00:36:51.101 "params": { 00:36:51.101 "name": "Nvme$subsystem", 00:36:51.101 "trtype": "$TEST_TRANSPORT", 00:36:51.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:51.101 "adrfam": "ipv4", 00:36:51.101 "trsvcid": "$NVMF_PORT", 00:36:51.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:36:51.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:51.101 "hdgst": ${hdgst:-false}, 00:36:51.101 "ddgst": ${ddgst:-false} 00:36:51.101 }, 00:36:51.101 "method": "bdev_nvme_attach_controller" 00:36:51.101 } 00:36:51.101 EOF 00:36:51.101 )") 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:51.101 "params": { 00:36:51.101 "name": "Nvme0", 00:36:51.101 "trtype": "tcp", 00:36:51.101 "traddr": "10.0.0.2", 00:36:51.101 "adrfam": "ipv4", 00:36:51.101 "trsvcid": "4420", 00:36:51.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:51.101 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:51.101 "hdgst": true, 00:36:51.101 "ddgst": true 00:36:51.101 }, 00:36:51.101 "method": "bdev_nvme_attach_controller" 00:36:51.101 }' 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:51.101 07:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:51.101 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:51.101 ... 
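The job file itself arrives on /dev/fd/61 from gen_fio_conf while the JSON printed above arrives on /dev/fd/62. A rough standalone equivalent of what the filename0 line implies; the bdev name Nvme0n1 is an assumption (namespace 1 of the "Nvme0" controller attached in that JSON), and the exact generated file may differ:
    cat > dif_digest.fio <<'EOF'
    [global]
    thread=1
    ioengine=spdk_bdev
    direct=1
    bs=128k
    iodepth=3
    rw=randread
    ; three workers, matching the "Starting 3 threads" banner below
    numjobs=3
    [filename0]
    filename=Nvme0n1
    EOF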
00:36:51.101 fio-3.35 00:36:51.101 Starting 3 threads 00:37:03.330 00:37:03.330 filename0: (groupid=0, jobs=1): err= 0: pid=3713183: Wed Nov 20 07:51:19 2024 00:37:03.330 read: IOPS=340, BW=42.6MiB/s (44.7MB/s)(428MiB/10047msec) 00:37:03.330 slat (nsec): min=5844, max=32491, avg=6574.49, stdev=1076.26 00:37:03.330 clat (usec): min=5725, max=50878, avg=8776.62, stdev=1571.88 00:37:03.330 lat (usec): min=5731, max=50885, avg=8783.20, stdev=1571.84 00:37:03.330 clat percentiles (usec): 00:37:03.330 | 1.00th=[ 6259], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 7504], 00:37:03.330 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9241], 00:37:03.330 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:37:03.330 | 99.00th=[11207], 99.50th=[11469], 99.90th=[12780], 99.95th=[47449], 00:37:03.330 | 99.99th=[51119] 00:37:03.330 bw ( KiB/s): min=41472, max=46592, per=39.85%, avg=43827.20, stdev=1383.85, samples=20 00:37:03.330 iops : min= 324, max= 364, avg=342.40, stdev=10.81, samples=20 00:37:03.330 lat (msec) : 10=83.54%, 20=16.40%, 50=0.03%, 100=0.03% 00:37:03.330 cpu : usr=94.20%, sys=5.56%, ctx=12, majf=0, minf=154 00:37:03.330 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.330 issued rwts: total=3426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.330 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:03.330 filename0: (groupid=0, jobs=1): err= 0: pid=3713184: Wed Nov 20 07:51:19 2024 00:37:03.330 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(244MiB/10047msec) 00:37:03.330 slat (nsec): min=5889, max=31644, avg=6533.26, stdev=936.73 00:37:03.330 clat (usec): min=7545, max=93507, avg=15434.29, stdev=14769.12 00:37:03.330 lat (usec): min=7551, max=93514, avg=15440.83, stdev=14769.17 00:37:03.330 clat percentiles (usec): 00:37:03.330 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:37:03.330 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:37:03.330 | 70.00th=[10814], 80.00th=[11207], 90.00th=[50070], 95.00th=[51643], 00:37:03.330 | 99.00th=[53216], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:37:03.330 | 99.99th=[93848] 00:37:03.330 bw ( KiB/s): min=15872, max=32000, per=22.66%, avg=24926.32, stdev=4764.33, samples=19 00:37:03.330 iops : min= 124, max= 250, avg=194.74, stdev=37.22, samples=19 00:37:03.330 lat (msec) : 10=39.05%, 20=48.95%, 50=1.64%, 100=10.36% 00:37:03.331 cpu : usr=95.60%, sys=4.17%, ctx=19, majf=0, minf=94 00:37:03.331 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.331 issued rwts: total=1949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.331 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:03.331 filename0: (groupid=0, jobs=1): err= 0: pid=3713185: Wed Nov 20 07:51:19 2024 00:37:03.331 read: IOPS=325, BW=40.7MiB/s (42.7MB/s)(407MiB/10003msec) 00:37:03.331 slat (nsec): min=5899, max=31168, avg=6709.50, stdev=1053.13 00:37:03.331 clat (usec): min=6034, max=51256, avg=9201.63, stdev=1861.58 00:37:03.331 lat (usec): min=6040, max=51287, avg=9208.34, stdev=1861.84 00:37:03.331 clat percentiles (usec): 00:37:03.331 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 
7701], 00:37:03.331 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[ 9765], 00:37:03.331 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10814], 95.00th=[11207], 00:37:03.331 | 99.00th=[11731], 99.50th=[12125], 99.90th=[13566], 99.95th=[51119], 00:37:03.331 | 99.99th=[51119] 00:37:03.331 bw ( KiB/s): min=38656, max=44544, per=37.90%, avg=41687.58, stdev=1755.59, samples=19 00:37:03.331 iops : min= 302, max= 348, avg=325.68, stdev=13.72, samples=19 00:37:03.331 lat (msec) : 10=67.07%, 20=32.84%, 100=0.09% 00:37:03.331 cpu : usr=92.26%, sys=6.61%, ctx=540, majf=0, minf=150 00:37:03.331 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.331 issued rwts: total=3258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.331 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:03.331 00:37:03.331 Run status group 0 (all jobs): 00:37:03.331 READ: bw=107MiB/s (113MB/s), 24.2MiB/s-42.6MiB/s (25.4MB/s-44.7MB/s), io=1079MiB (1132MB), run=10003-10047msec 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.331 00:37:03.331 real 0m11.154s 00:37:03.331 user 0m41.577s 00:37:03.331 sys 0m1.979s 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:03.331 07:51:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:03.331 ************************************ 00:37:03.331 END TEST fio_dif_digest 00:37:03.331 ************************************ 00:37:03.331 07:51:19 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:03.331 07:51:19 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:03.331 07:51:19 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:03.331 07:51:19 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:03.331 07:51:19 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:03.331 07:51:19 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:03.331 07:51:19 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:03.331 07:51:19 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:03.331 rmmod nvme_tcp 00:37:03.331 rmmod nvme_fabrics 00:37:03.331 rmmod nvme_keyring 00:37:03.331 07:51:19 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:03.331 07:51:19 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:03.331 07:51:19 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:03.331 07:51:19 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3702910 ']' 00:37:03.331 07:51:19 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3702910 00:37:03.331 07:51:19 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 3702910 ']' 00:37:03.331 07:51:19 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 3702910 00:37:03.331 07:51:19 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:37:03.331 07:51:19 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:03.331 07:51:19 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3702910 00:37:03.331 07:51:19 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:03.331 07:51:19 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:03.331 07:51:19 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3702910' 00:37:03.331 killing process with pid 3702910 00:37:03.331 07:51:19 nvmf_dif -- common/autotest_common.sh@971 -- # kill 3702910 00:37:03.331 07:51:19 nvmf_dif -- common/autotest_common.sh@976 -- # wait 3702910 00:37:03.331 07:51:19 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:03.331 07:51:19 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:05.246 Waiting for block devices as requested 00:37:05.246 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:05.246 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:05.246 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:05.246 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:05.506 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:05.506 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:05.506 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:05.765 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:05.765 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:06.026 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:06.026 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:06.026 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:06.287 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:06.287 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:06.287 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:06.547 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:06.547 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:06.807 07:51:24 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:06.807 07:51:24 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:06.807 07:51:24 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:06.807 07:51:24 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:06.807 07:51:24 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:06.807 07:51:24 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:06.807 07:51:24 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:06.807 07:51:24 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:06.807 07:51:24 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:06.807 07:51:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:06.807 07:51:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:09.351 07:51:27 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:09.351 
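Condensed, the teardown that nvmftestfini just performed looks roughly like the following as standalone commands; the namespace name cvl_0_0_ns_spdk and the SPDK_NVMF iptables tag are taken from the traces above, not from a fixed interface:
    kill "$nvmfpid"                                       # stop the nvmf_tgt reactor (pid 3702910 in this run)
    modprobe -v -r nvme-tcp                               # verbose removal also rmmods nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the test's tagged ACCEPT rules
    ip netns delete cvl_0_0_ns_spdk                       # _remove_spdk_ns: discard the target-side namespace
    ip -4 addr flush cvl_0_1                              # clear the initiator-side address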
00:37:09.351 real 1m18.855s 00:37:09.351 user 7m56.639s 00:37:09.351 sys 0m22.336s 00:37:09.351 07:51:27 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:09.351 07:51:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:09.351 ************************************ 00:37:09.351 END TEST nvmf_dif 00:37:09.351 ************************************ 00:37:09.351 07:51:27 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:09.352 07:51:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:09.352 07:51:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:09.352 07:51:27 -- common/autotest_common.sh@10 -- # set +x 00:37:09.352 ************************************ 00:37:09.352 START TEST nvmf_abort_qd_sizes 00:37:09.352 ************************************ 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:09.352 * Looking for test storage... 00:37:09.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:09.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.352 --rc genhtml_branch_coverage=1 00:37:09.352 --rc genhtml_function_coverage=1 00:37:09.352 --rc genhtml_legend=1 00:37:09.352 --rc geninfo_all_blocks=1 00:37:09.352 --rc geninfo_unexecuted_blocks=1 00:37:09.352 00:37:09.352 ' 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:09.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.352 --rc genhtml_branch_coverage=1 00:37:09.352 --rc genhtml_function_coverage=1 00:37:09.352 --rc genhtml_legend=1 00:37:09.352 --rc geninfo_all_blocks=1 00:37:09.352 --rc geninfo_unexecuted_blocks=1 00:37:09.352 00:37:09.352 ' 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:09.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.352 --rc genhtml_branch_coverage=1 00:37:09.352 --rc genhtml_function_coverage=1 00:37:09.352 --rc genhtml_legend=1 00:37:09.352 --rc geninfo_all_blocks=1 00:37:09.352 --rc geninfo_unexecuted_blocks=1 00:37:09.352 00:37:09.352 ' 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:09.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:09.352 --rc genhtml_branch_coverage=1 00:37:09.352 --rc genhtml_function_coverage=1 00:37:09.352 --rc genhtml_legend=1 00:37:09.352 --rc geninfo_all_blocks=1 00:37:09.352 --rc geninfo_unexecuted_blocks=1 00:37:09.352 00:37:09.352 ' 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:09.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:09.352 07:51:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:17.489 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:17.490 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:17.490 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:17.490 Found net devices under 0000:31:00.0: cvl_0_0 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:17.490 Found net devices under 0000:31:00.1: cvl_0_1 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:17.490 07:51:34 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:17.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:17.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:37:17.490 00:37:17.490 --- 10.0.0.2 ping statistics --- 00:37:17.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:17.490 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:17.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:17.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:37:17.490 00:37:17.490 --- 10.0.0.1 ping statistics --- 00:37:17.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:17.490 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:17.490 07:51:34 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:20.788 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:20.788 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:21.050 07:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:21.050 07:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:21.050 07:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:21.050 07:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3723299 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3723299 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 3723299 ']' 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:21.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:21.050 07:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:21.050 [2024-11-20 07:51:39.112991] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:37:21.050 [2024-11-20 07:51:39.113057] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:21.050 [2024-11-20 07:51:39.211609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:21.310 [2024-11-20 07:51:39.266881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:21.310 [2024-11-20 07:51:39.266932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:21.310 [2024-11-20 07:51:39.266941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:21.310 [2024-11-20 07:51:39.266949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:21.310 [2024-11-20 07:51:39.266956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:21.310 [2024-11-20 07:51:39.269449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.310 [2024-11-20 07:51:39.269593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:21.310 [2024-11-20 07:51:39.269760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:21.310 [2024-11-20 07:51:39.269775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:21.880 07:51:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:21.880 
07:51:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:21.880 07:51:40 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:21.880 07:51:40 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:21.880 07:51:40 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:21.880 07:51:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:21.880 07:51:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:21.880 07:51:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:21.880 07:51:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:21.880 07:51:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:21.880 07:51:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:21.880 ************************************ 00:37:21.880 START TEST spdk_target_abort 00:37:21.880 ************************************ 00:37:21.880 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:37:21.880 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:21.880 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:21.880 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.880 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.452 spdk_targetn1 00:37:22.452 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.452 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:22.452 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.452 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.452 [2024-11-20 07:51:40.363517] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:22.452 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:22.453 [2024-11-20 07:51:40.415972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:22.453 07:51:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:22.713 [2024-11-20 07:51:40.687349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:189 nsid:1 lba:1424 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:22.713 [2024-11-20 07:51:40.687401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00b3 p:1 m:0 dnr:0 00:37:26.009 Initializing NVMe Controllers 00:37:26.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:26.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:26.009 Initialization complete. Launching workers. 00:37:26.009 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14101, failed: 1 00:37:26.009 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3244, failed to submit 10858 00:37:26.009 success 754, unsuccessful 2490, failed 0 00:37:26.009 07:51:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:26.009 07:51:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:26.010 [2024-11-20 07:51:43.988436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200004e56000 PRP2 0x0 00:37:26.010 [2024-11-20 07:51:43.988474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0023 p:1 m:0 dnr:0 00:37:26.010 [2024-11-20 07:51:43.996888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:312 len:8 PRP1 0x200004e54000 PRP2 0x0 00:37:26.010 [2024-11-20 07:51:43.996910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0035 p:1 m:0 dnr:0 00:37:26.010 [2024-11-20 07:51:44.036889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:1208 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:37:26.010 [2024-11-20 07:51:44.036911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00a0 p:1 m:0 dnr:0 00:37:26.010 [2024-11-20 07:51:44.052855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:1592 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:37:26.010 [2024-11-20 07:51:44.052875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00d0 p:1 m:0 dnr:0 00:37:26.010 [2024-11-20 07:51:44.076819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:2112 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:37:26.010 [2024-11-20 07:51:44.076840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:37:26.010 [2024-11-20 07:51:44.097989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:2656 len:8 PRP1 0x200004e40000 PRP2 0x0 00:37:26.010 [2024-11-20 07:51:44.098017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:26.269 [2024-11-20 07:51:44.467896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:11216 len:8 PRP1 0x200004e42000 PRP2 0x0 00:37:26.269 [2024-11-20 07:51:44.467925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:29.569 Initializing NVMe Controllers 00:37:29.569 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:29.569 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:29.569 Initialization complete. Launching workers. 00:37:29.569 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8553, failed: 7 00:37:29.569 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1246, failed to submit 7314 00:37:29.569 success 322, unsuccessful 924, failed 0 00:37:29.569 07:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:29.569 07:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:29.569 [2024-11-20 07:51:47.709277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:38080 len:8 PRP1 0x200004af8000 PRP2 0x0 00:37:29.569 [2024-11-20 07:51:47.709307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00b8 p:0 m:0 dnr:0 00:37:30.140 [2024-11-20 07:51:48.327298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:156 nsid:1 lba:110696 len:8 PRP1 0x200004af8000 PRP2 0x0 00:37:30.140 [2024-11-20 07:51:48.327320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:156 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:32.681 Initializing NVMe Controllers 00:37:32.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:32.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:32.681 Initialization complete. Launching workers. 
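All three runs in this test come from the same loop in rabort(); reduced to its core, with the queue depths and connect string exactly as traced earlier:
    qds=(4 24 64)
    r='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in "${qds[@]}"; do
        # -w rw -M 50: 50/50 read/write mix; -o 4096: 4 KiB I/O; aborts are issued against the queue
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$r"
    done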
00:37:32.681 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43961, failed: 2 00:37:32.681 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2676, failed to submit 41287 00:37:32.681 success 620, unsuccessful 2056, failed 0 00:37:32.681 07:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:32.681 07:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.681 07:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:32.681 07:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.681 07:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:32.681 07:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.681 07:51:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:34.061 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.061 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3723299 00:37:34.061 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 3723299 ']' 00:37:34.061 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 3723299 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3723299 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3723299' 00:37:34.322 killing process with pid 3723299 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 3723299 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 3723299 00:37:34.322 00:37:34.322 real 0m12.389s 00:37:34.322 user 0m50.500s 00:37:34.322 sys 0m2.060s 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:34.322 ************************************ 00:37:34.322 END TEST spdk_target_abort 00:37:34.322 ************************************ 00:37:34.322 07:51:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:34.322 07:51:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:34.322 07:51:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:34.322 07:51:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:34.322 ************************************ 00:37:34.322 START TEST kernel_target_abort 00:37:34.322 
************************************ 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:34.322 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:34.583 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:34.583 07:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:37.881 Waiting for block devices as requested 00:37:37.881 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:37.881 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:38.141 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:38.141 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:38.141 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:38.401 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:38.401 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:38.401 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:38.401 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:38.662 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:38.921 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:38.921 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:38.921 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:38.921 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:39.236 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:39.236 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:39.236 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:39.515 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:39.515 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:39.515 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:39.515 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:39.515 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:39.515 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:39.515 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:39.515 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:39.515 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:39.799 No valid GPT data, bailing 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:39.799 07:51:57 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:37:39.799 00:37:39.799 Discovery Log Number of Records 2, Generation counter 2 00:37:39.799 =====Discovery Log Entry 0====== 00:37:39.799 trtype: tcp 00:37:39.799 adrfam: ipv4 00:37:39.799 subtype: current discovery subsystem 00:37:39.799 treq: not specified, sq flow control disable supported 00:37:39.799 portid: 1 00:37:39.799 trsvcid: 4420 00:37:39.799 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:39.799 traddr: 10.0.0.1 00:37:39.799 eflags: none 00:37:39.799 sectype: none 00:37:39.799 =====Discovery Log Entry 1====== 00:37:39.799 trtype: tcp 00:37:39.799 adrfam: ipv4 00:37:39.799 subtype: nvme subsystem 00:37:39.799 treq: not specified, sq flow control disable supported 00:37:39.799 portid: 1 00:37:39.799 trsvcid: 4420 00:37:39.799 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:39.799 traddr: 10.0.0.1 00:37:39.799 eflags: none 00:37:39.799 sectype: none 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:39.799 07:51:57 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:39.799 07:51:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:43.122 Initializing NVMe Controllers 00:37:43.122 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:43.122 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:43.122 Initialization complete. Launching workers. 00:37:43.122 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66848, failed: 0 00:37:43.122 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66848, failed to submit 0 00:37:43.122 success 0, unsuccessful 66848, failed 0 00:37:43.122 07:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:43.122 07:52:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:46.421 Initializing NVMe Controllers 00:37:46.421 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:46.421 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:46.421 Initialization complete. Launching workers. 
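The configure_kernel_target trace above records only the echo arguments, not their configfs redirect targets (redirections are not captured by xtrace). A sketch of the equivalent setup, assuming the standard Linux nvmet attribute names for the redirect targets:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"    # assumed target file
    echo 1            > "$subsys/attr_allow_any_host"                # assumed target file
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"    # expose the subsystem on the port

Note that in the three abort runs against this kernel target that follow, every submitted abort comes back unsuccessful (success 0), in contrast to the SPDK target runs above, where a portion of the aborts succeed.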
00:37:46.421 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 116670, failed: 0 00:37:46.421 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29386, failed to submit 87284 00:37:46.421 success 0, unsuccessful 29386, failed 0 00:37:46.421 07:52:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:46.421 07:52:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:49.714 Initializing NVMe Controllers 00:37:49.714 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:49.714 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:49.714 Initialization complete. Launching workers. 00:37:49.714 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146973, failed: 0 00:37:49.714 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36782, failed to submit 110191 00:37:49.714 success 0, unsuccessful 36782, failed 0 00:37:49.714 07:52:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:49.714 07:52:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:49.714 07:52:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:49.714 07:52:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:49.714 07:52:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:49.714 07:52:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:49.714 07:52:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:49.714 07:52:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:49.714 07:52:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:49.714 07:52:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:53.011 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:53.011 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:37:53.011 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:54.920 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:55.180 00:37:55.180 real 0m20.611s 00:37:55.180 user 0m9.831s 00:37:55.180 sys 0m6.368s 00:37:55.180 07:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:55.180 07:52:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.180 ************************************ 00:37:55.180 END TEST kernel_target_abort 00:37:55.180 ************************************ 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:55.180 rmmod nvme_tcp 00:37:55.180 rmmod nvme_fabrics 00:37:55.180 rmmod nvme_keyring 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3723299 ']' 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3723299 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 3723299 ']' 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 3723299 00:37:55.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3723299) - No such process 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 3723299 is not found' 00:37:55.180 Process with pid 3723299 is not found 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:55.180 07:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:58.478 Waiting for block devices as requested 00:37:58.739 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:58.739 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:58.739 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:59.000 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:59.000 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:59.000 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:59.259 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:59.259 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:59.259 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:59.519 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:59.519 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:59.779 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:59.779 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:59.779 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:00.037 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:00.037 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:00.037 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:00.298 07:52:18 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:00.298 07:52:18 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:00.298 07:52:18 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:00.559 07:52:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:00.559 07:52:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:00.559 07:52:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:00.559 07:52:18 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:00.559 07:52:18 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:00.559 07:52:18 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.559 07:52:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:00.559 07:52:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:02.469 07:52:20 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:02.469 00:38:02.469 real 0m53.475s 00:38:02.469 user 1m5.878s 00:38:02.469 sys 0m19.909s 00:38:02.469 07:52:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:02.469 07:52:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:02.469 ************************************ 00:38:02.469 END TEST nvmf_abort_qd_sizes 00:38:02.469 ************************************ 00:38:02.469 07:52:20 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:02.469 07:52:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:02.469 07:52:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:02.469 07:52:20 -- common/autotest_common.sh@10 -- # set +x 00:38:02.730 ************************************ 00:38:02.730 START TEST keyring_file 00:38:02.730 ************************************ 00:38:02.730 07:52:20 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:02.730 * Looking for test storage... 
00:38:02.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:02.730 07:52:20 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:02.730 07:52:20 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:38:02.730 07:52:20 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:02.730 07:52:20 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:02.730 07:52:20 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:02.730 07:52:20 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:02.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.730 --rc genhtml_branch_coverage=1 00:38:02.730 --rc genhtml_function_coverage=1 00:38:02.730 --rc genhtml_legend=1 00:38:02.730 --rc geninfo_all_blocks=1 00:38:02.730 --rc geninfo_unexecuted_blocks=1 00:38:02.730 00:38:02.730 ' 00:38:02.730 07:52:20 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:02.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.730 --rc genhtml_branch_coverage=1 00:38:02.730 --rc genhtml_function_coverage=1 00:38:02.730 --rc genhtml_legend=1 00:38:02.730 --rc geninfo_all_blocks=1 
00:38:02.730 --rc geninfo_unexecuted_blocks=1 00:38:02.730 00:38:02.730 ' 00:38:02.730 07:52:20 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:02.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.730 --rc genhtml_branch_coverage=1 00:38:02.730 --rc genhtml_function_coverage=1 00:38:02.730 --rc genhtml_legend=1 00:38:02.730 --rc geninfo_all_blocks=1 00:38:02.730 --rc geninfo_unexecuted_blocks=1 00:38:02.730 00:38:02.730 ' 00:38:02.730 07:52:20 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:02.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.730 --rc genhtml_branch_coverage=1 00:38:02.730 --rc genhtml_function_coverage=1 00:38:02.730 --rc genhtml_legend=1 00:38:02.730 --rc geninfo_all_blocks=1 00:38:02.730 --rc geninfo_unexecuted_blocks=1 00:38:02.730 00:38:02.730 ' 00:38:02.730 07:52:20 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:02.730 07:52:20 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:02.730 07:52:20 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:02.730 07:52:20 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:02.731 07:52:20 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.731 07:52:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.731 07:52:20 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.731 07:52:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:02.731 07:52:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:02.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:02.731 07:52:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:02.731 07:52:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:02.731 07:52:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:02.731 07:52:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:02.731 07:52:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:02.731 07:52:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:02.731 07:52:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:02.731 07:52:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
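prep_key, entered here, writes a TLS PSK in the NVMe interchange format to a mktemp file and locks the mode down to 0600. The inline python body is not visible in the trace; a plausible equivalent, assuming the TP 8006 interchange encoding (base64 over the key bytes plus a little-endian CRC32 trailer, with the second field of the string carrying the digest selector):

    key=00112233445566778899aabbccddeeff
    digest=0
    path=$(mktemp)
    # base64 over key bytes plus little-endian CRC32 trailer (assumed encoding)
    b64=$(python3 -c 'import base64,binascii,struct,sys; raw=bytes.fromhex(sys.argv[1]); print(base64.b64encode(raw+struct.pack("<I",binascii.crc32(raw))).decode())' "$key")
    printf 'NVMeTLSkey-1:%02x:%s:\n' "$digest" "$b64" > "$path"
    chmod 0600 "$path"    # keyring_file_check_path rejects wider modes

The 0600 mode matters later in this test: the chmod 0660 negative case further down is rejected with "Invalid permissions for key file", and the rm -f case with "No such file or directory".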
00:38:02.731 07:52:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:02.731 07:52:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:02.731 07:52:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:02.731 07:52:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:02.731 07:52:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fEk7q84CMg 00:38:02.731 07:52:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:02.731 07:52:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:02.991 07:52:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fEk7q84CMg 00:38:02.991 07:52:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fEk7q84CMg 00:38:02.991 07:52:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.fEk7q84CMg 00:38:02.991 07:52:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:02.991 07:52:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:02.991 07:52:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:02.991 07:52:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:02.991 07:52:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:02.991 07:52:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:02.991 07:52:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.f2OxwTVpZ4 00:38:02.991 07:52:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:02.991 07:52:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:02.991 07:52:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:02.991 07:52:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:02.992 07:52:20 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:02.992 07:52:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:02.992 07:52:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:02.992 07:52:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.f2OxwTVpZ4 00:38:02.992 07:52:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.f2OxwTVpZ4 00:38:02.992 07:52:21 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.f2OxwTVpZ4 00:38:02.992 07:52:21 keyring_file -- keyring/file.sh@30 -- # tgtpid=3733699 00:38:02.992 07:52:21 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3733699 00:38:02.992 07:52:21 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:02.992 07:52:21 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3733699 ']' 00:38:02.992 07:52:21 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:02.992 07:52:21 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:02.992 07:52:21 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:02.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:02.992 07:52:21 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:02.992 07:52:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:02.992 [2024-11-20 07:52:21.105169] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:38:02.992 [2024-11-20 07:52:21.105246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733699 ] 00:38:03.251 [2024-11-20 07:52:21.198583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.251 [2024-11-20 07:52:21.251531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:03.822 07:52:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:03.822 [2024-11-20 07:52:21.933160] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:03.822 null0 00:38:03.822 [2024-11-20 07:52:21.965202] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:03.822 [2024-11-20 07:52:21.965764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:03.822 07:52:21 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:03.822 07:52:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:03.822 [2024-11-20 07:52:21.997263] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:03.822 request: 00:38:03.822 { 00:38:03.822 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:03.822 "secure_channel": false, 00:38:03.822 "listen_address": { 00:38:03.822 "trtype": "tcp", 00:38:03.822 "traddr": "127.0.0.1", 00:38:03.822 "trsvcid": "4420" 00:38:03.822 }, 00:38:03.822 "method": "nvmf_subsystem_add_listener", 00:38:03.822 "req_id": 1 00:38:03.822 } 00:38:03.822 Got JSON-RPC error response 00:38:03.822 response: 00:38:03.822 { 00:38:03.822 
"code": -32602, 00:38:03.822 "message": "Invalid parameters" 00:38:03.822 } 00:38:03.822 07:52:22 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:03.822 07:52:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:03.822 07:52:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:03.822 07:52:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:03.822 07:52:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:03.822 07:52:22 keyring_file -- keyring/file.sh@47 -- # bperfpid=3733721 00:38:03.822 07:52:22 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3733721 /var/tmp/bperf.sock 00:38:03.822 07:52:22 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3733721 ']' 00:38:03.822 07:52:22 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:03.822 07:52:22 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:03.822 07:52:22 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:03.822 07:52:22 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:03.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:03.822 07:52:22 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:03.822 07:52:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:04.082 [2024-11-20 07:52:22.059996] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:38:04.082 [2024-11-20 07:52:22.060061] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733721 ] 00:38:04.082 [2024-11-20 07:52:22.152074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:04.083 [2024-11-20 07:52:22.205245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:05.024 07:52:22 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:05.024 07:52:22 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:05.024 07:52:22 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fEk7q84CMg 00:38:05.024 07:52:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fEk7q84CMg 00:38:05.024 07:52:23 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.f2OxwTVpZ4 00:38:05.024 07:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.f2OxwTVpZ4 00:38:05.284 07:52:23 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:05.284 07:52:23 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:05.284 07:52:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.284 07:52:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:05.284 07:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:38:05.284 07:52:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.fEk7q84CMg == \/\t\m\p\/\t\m\p\.\f\E\k\7\q\8\4\C\M\g ]] 00:38:05.284 07:52:23 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:05.284 07:52:23 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:05.284 07:52:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.284 07:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.284 07:52:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:05.543 07:52:23 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.f2OxwTVpZ4 == \/\t\m\p\/\t\m\p\.\f\2\O\x\w\T\V\p\Z\4 ]] 00:38:05.543 07:52:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:05.543 07:52:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:05.543 07:52:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:05.543 07:52:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.543 07:52:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:05.543 07:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.804 07:52:23 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:05.804 07:52:23 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:05.804 07:52:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:05.804 07:52:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:05.804 07:52:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.804 07:52:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:05.804 07:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:06.065 07:52:24 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:06.065 07:52:24 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:06.065 07:52:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:06.065 [2024-11-20 07:52:24.179953] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:06.065 nvme0n1 00:38:06.325 07:52:24 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:06.325 07:52:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:06.325 07:52:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:06.325 07:52:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:06.325 07:52:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:06.325 07:52:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:06.325 07:52:24 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:06.325 07:52:24 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:06.325 07:52:24 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:38:06.325 07:52:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:06.325 07:52:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:06.325 07:52:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:06.325 07:52:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:06.586 07:52:24 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:06.586 07:52:24 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:06.586 Running I/O for 1 seconds... 00:38:07.966 16401.00 IOPS, 64.07 MiB/s 00:38:07.966 Latency(us) 00:38:07.966 [2024-11-20T06:52:26.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:07.966 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:07.966 nvme0n1 : 1.00 16453.92 64.27 0.00 0.00 7764.59 2580.48 15947.09 00:38:07.966 [2024-11-20T06:52:26.176Z] =================================================================================================================== 00:38:07.966 [2024-11-20T06:52:26.176Z] Total : 16453.92 64.27 0.00 0.00 7764.59 2580.48 15947.09 00:38:07.966 { 00:38:07.966 "results": [ 00:38:07.966 { 00:38:07.966 "job": "nvme0n1", 00:38:07.966 "core_mask": "0x2", 00:38:07.966 "workload": "randrw", 00:38:07.966 "percentage": 50, 00:38:07.966 "status": "finished", 00:38:07.966 "queue_depth": 128, 00:38:07.966 "io_size": 4096, 00:38:07.966 "runtime": 1.004624, 00:38:07.966 "iops": 16453.91708738792, 00:38:07.966 "mibps": 64.27311362260906, 00:38:07.966 "io_failed": 0, 00:38:07.966 "io_timeout": 0, 00:38:07.966 "avg_latency_us": 7764.588305706797, 00:38:07.966 "min_latency_us": 2580.48, 00:38:07.966 "max_latency_us": 15947.093333333334 00:38:07.966 } 00:38:07.966 ], 00:38:07.966 "core_count": 1 00:38:07.966 } 00:38:07.966 07:52:25 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:07.966 07:52:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:07.966 07:52:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:07.966 07:52:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:07.966 07:52:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:07.966 07:52:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:07.966 07:52:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:07.966 07:52:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:07.966 07:52:26 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:07.966 07:52:26 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:07.966 07:52:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:07.966 07:52:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:07.966 07:52:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:07.966 07:52:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:07.966 07:52:26 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.226 07:52:26 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:08.226 07:52:26 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:08.226 07:52:26 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:08.226 07:52:26 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:08.226 07:52:26 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:08.226 07:52:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:08.226 07:52:26 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:08.226 07:52:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:08.226 07:52:26 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:08.226 07:52:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:08.485 [2024-11-20 07:52:26.488444] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:08.485 [2024-11-20 07:52:26.488930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf76cb0 (107): Transport endpoint is not connected 00:38:08.485 [2024-11-20 07:52:26.489923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf76cb0 (9): Bad file descriptor 00:38:08.485 [2024-11-20 07:52:26.490923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:08.485 [2024-11-20 07:52:26.490932] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:08.485 [2024-11-20 07:52:26.490941] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:08.485 [2024-11-20 07:52:26.490952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
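The errors above are the intended outcome of file.sh@70: the attach uses --psk key1 while the target side was set up to accept key0, so the connection is torn down (presumably at the TLS handshake, since the PSKs do not match) and the RPC, whose request/response exchange is dumped next, returns -5 Input/output error. The test asserts the failure by wrapping the call in the NOT helper from autotest_common.sh, which inverts the exit status:

    # NOT succeeds only if the wrapped command fails (see the es=1 handling below)
    NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key1

The later negative cases follow the same shape: wrong file permissions (file.sh@82) yields -1 Operation not permitted, and a deleted key file (file.sh@91) yields -19 No such device.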
00:38:08.485 request: 00:38:08.485 { 00:38:08.485 "name": "nvme0", 00:38:08.485 "trtype": "tcp", 00:38:08.485 "traddr": "127.0.0.1", 00:38:08.485 "adrfam": "ipv4", 00:38:08.485 "trsvcid": "4420", 00:38:08.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:08.485 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:08.485 "prchk_reftag": false, 00:38:08.485 "prchk_guard": false, 00:38:08.485 "hdgst": false, 00:38:08.485 "ddgst": false, 00:38:08.485 "psk": "key1", 00:38:08.485 "allow_unrecognized_csi": false, 00:38:08.485 "method": "bdev_nvme_attach_controller", 00:38:08.485 "req_id": 1 00:38:08.485 } 00:38:08.485 Got JSON-RPC error response 00:38:08.485 response: 00:38:08.485 { 00:38:08.485 "code": -5, 00:38:08.486 "message": "Input/output error" 00:38:08.486 } 00:38:08.486 07:52:26 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:08.486 07:52:26 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:08.486 07:52:26 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:08.486 07:52:26 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:08.486 07:52:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:08.486 07:52:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:08.486 07:52:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:08.486 07:52:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:08.486 07:52:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:08.486 07:52:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.486 07:52:26 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:08.486 07:52:26 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:08.486 07:52:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:08.486 07:52:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:08.486 07:52:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:08.486 07:52:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:08.486 07:52:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.745 07:52:26 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:08.745 07:52:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:08.745 07:52:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:09.004 07:52:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:09.004 07:52:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:09.263 07:52:27 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:09.263 07:52:27 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:09.263 07:52:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:09.263 07:52:27 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:09.263 07:52:27 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.fEk7q84CMg 00:38:09.263 07:52:27 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.fEk7q84CMg 00:38:09.263 07:52:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:09.263 07:52:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.fEk7q84CMg 00:38:09.263 07:52:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:09.263 07:52:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:09.263 07:52:27 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:09.263 07:52:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:09.263 07:52:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fEk7q84CMg 00:38:09.263 07:52:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fEk7q84CMg 00:38:09.522 [2024-11-20 07:52:27.566697] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fEk7q84CMg': 0100660 00:38:09.522 [2024-11-20 07:52:27.566715] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:09.522 request: 00:38:09.522 { 00:38:09.522 "name": "key0", 00:38:09.522 "path": "/tmp/tmp.fEk7q84CMg", 00:38:09.522 "method": "keyring_file_add_key", 00:38:09.522 "req_id": 1 00:38:09.522 } 00:38:09.522 Got JSON-RPC error response 00:38:09.522 response: 00:38:09.522 { 00:38:09.522 "code": -1, 00:38:09.522 "message": "Operation not permitted" 00:38:09.522 } 00:38:09.522 07:52:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:09.522 07:52:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:09.522 07:52:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:09.522 07:52:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:09.522 07:52:27 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.fEk7q84CMg 00:38:09.522 07:52:27 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fEk7q84CMg 00:38:09.522 07:52:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fEk7q84CMg 00:38:09.781 07:52:27 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.fEk7q84CMg 00:38:09.781 07:52:27 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:09.781 07:52:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:09.781 07:52:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:09.781 07:52:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:09.781 07:52:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:09.781 07:52:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:09.781 07:52:27 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:09.781 07:52:27 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:09.781 07:52:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:09.781 07:52:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:09.781 07:52:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:09.781 07:52:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:09.781 07:52:27 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:09.781 07:52:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:09.781 07:52:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:09.781 07:52:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:10.041 [2024-11-20 07:52:28.140152] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.fEk7q84CMg': No such file or directory 00:38:10.041 [2024-11-20 07:52:28.140170] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:10.041 [2024-11-20 07:52:28.140187] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:10.041 [2024-11-20 07:52:28.140195] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:10.041 [2024-11-20 07:52:28.140203] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:10.041 [2024-11-20 07:52:28.140210] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:10.041 request: 00:38:10.041 { 00:38:10.041 "name": "nvme0", 00:38:10.041 "trtype": "tcp", 00:38:10.041 "traddr": "127.0.0.1", 00:38:10.041 "adrfam": "ipv4", 00:38:10.041 "trsvcid": "4420", 00:38:10.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:10.041 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:10.041 "prchk_reftag": false, 00:38:10.041 "prchk_guard": false, 00:38:10.041 "hdgst": false, 00:38:10.041 "ddgst": false, 00:38:10.041 "psk": "key0", 00:38:10.041 "allow_unrecognized_csi": false, 00:38:10.041 "method": "bdev_nvme_attach_controller", 00:38:10.041 "req_id": 1 00:38:10.041 } 00:38:10.041 Got JSON-RPC error response 00:38:10.041 response: 00:38:10.041 { 00:38:10.041 "code": -19, 00:38:10.041 "message": "No such device" 00:38:10.041 } 00:38:10.041 07:52:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:10.041 07:52:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:10.041 07:52:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:10.041 07:52:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:10.041 07:52:28 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:10.041 07:52:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:10.306 07:52:28 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:10.306 07:52:28 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:10.306 07:52:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:10.306 07:52:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:10.306 07:52:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:10.306 07:52:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:10.306 07:52:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.15CRGTFhcp 00:38:10.306 07:52:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:10.306 07:52:28 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:10.306 07:52:28 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:10.306 07:52:28 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:10.306 07:52:28 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:10.306 07:52:28 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:10.306 07:52:28 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:10.306 07:52:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.15CRGTFhcp 00:38:10.306 07:52:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.15CRGTFhcp 00:38:10.306 07:52:28 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.15CRGTFhcp 00:38:10.306 07:52:28 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.15CRGTFhcp 00:38:10.306 07:52:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.15CRGTFhcp 00:38:10.566 07:52:28 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:10.566 07:52:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:10.566 nvme0n1 00:38:10.566 07:52:28 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:10.566 07:52:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:10.566 07:52:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:10.567 07:52:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:10.567 07:52:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:10.567 07:52:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:10.826 07:52:28 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:10.826 07:52:28 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:10.826 07:52:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:11.086 07:52:29 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:11.086 07:52:29 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:11.086 07:52:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:11.086 07:52:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:11.086 07:52:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:11.345 07:52:29 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:11.345 07:52:29 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:11.345 07:52:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:11.345 07:52:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:11.345 07:52:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:11.345 07:52:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:11.345 07:52:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:11.345 07:52:29 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:11.345 07:52:29 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:11.345 07:52:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:11.604 07:52:29 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:11.604 07:52:29 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:11.604 07:52:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:11.865 07:52:29 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:11.865 07:52:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.15CRGTFhcp 00:38:11.865 07:52:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.15CRGTFhcp 00:38:11.865 07:52:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.f2OxwTVpZ4 00:38:11.865 07:52:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.f2OxwTVpZ4 00:38:12.125 07:52:30 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:12.125 07:52:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:12.384 nvme0n1 00:38:12.384 07:52:30 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:12.384 07:52:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:12.645 07:52:30 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:12.645 "subsystems": [ 00:38:12.645 { 00:38:12.645 "subsystem": "keyring", 00:38:12.645 "config": [ 00:38:12.645 { 00:38:12.645 "method": "keyring_file_add_key", 00:38:12.645 "params": { 00:38:12.645 "name": "key0", 00:38:12.645 "path": "/tmp/tmp.15CRGTFhcp" 00:38:12.645 } 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "method": "keyring_file_add_key", 00:38:12.645 "params": { 00:38:12.645 "name": "key1", 00:38:12.645 "path": "/tmp/tmp.f2OxwTVpZ4" 00:38:12.645 } 00:38:12.645 } 00:38:12.645 ] 
00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "subsystem": "iobuf", 00:38:12.645 "config": [ 00:38:12.645 { 00:38:12.645 "method": "iobuf_set_options", 00:38:12.645 "params": { 00:38:12.645 "small_pool_count": 8192, 00:38:12.645 "large_pool_count": 1024, 00:38:12.645 "small_bufsize": 8192, 00:38:12.645 "large_bufsize": 135168, 00:38:12.645 "enable_numa": false 00:38:12.645 } 00:38:12.645 } 00:38:12.645 ] 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "subsystem": "sock", 00:38:12.645 "config": [ 00:38:12.645 { 00:38:12.645 "method": "sock_set_default_impl", 00:38:12.645 "params": { 00:38:12.645 "impl_name": "posix" 00:38:12.645 } 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "method": "sock_impl_set_options", 00:38:12.645 "params": { 00:38:12.645 "impl_name": "ssl", 00:38:12.645 "recv_buf_size": 4096, 00:38:12.645 "send_buf_size": 4096, 00:38:12.645 "enable_recv_pipe": true, 00:38:12.645 "enable_quickack": false, 00:38:12.645 "enable_placement_id": 0, 00:38:12.645 "enable_zerocopy_send_server": true, 00:38:12.645 "enable_zerocopy_send_client": false, 00:38:12.645 "zerocopy_threshold": 0, 00:38:12.645 "tls_version": 0, 00:38:12.645 "enable_ktls": false 00:38:12.645 } 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "method": "sock_impl_set_options", 00:38:12.645 "params": { 00:38:12.645 "impl_name": "posix", 00:38:12.645 "recv_buf_size": 2097152, 00:38:12.645 "send_buf_size": 2097152, 00:38:12.645 "enable_recv_pipe": true, 00:38:12.645 "enable_quickack": false, 00:38:12.645 "enable_placement_id": 0, 00:38:12.645 "enable_zerocopy_send_server": true, 00:38:12.645 "enable_zerocopy_send_client": false, 00:38:12.645 "zerocopy_threshold": 0, 00:38:12.645 "tls_version": 0, 00:38:12.645 "enable_ktls": false 00:38:12.645 } 00:38:12.645 } 00:38:12.645 ] 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "subsystem": "vmd", 00:38:12.645 "config": [] 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "subsystem": "accel", 00:38:12.645 "config": [ 00:38:12.645 { 00:38:12.645 "method": "accel_set_options", 00:38:12.645 "params": { 00:38:12.645 "small_cache_size": 128, 00:38:12.645 "large_cache_size": 16, 00:38:12.645 "task_count": 2048, 00:38:12.645 "sequence_count": 2048, 00:38:12.645 "buf_count": 2048 00:38:12.645 } 00:38:12.645 } 00:38:12.645 ] 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "subsystem": "bdev", 00:38:12.645 "config": [ 00:38:12.645 { 00:38:12.645 "method": "bdev_set_options", 00:38:12.645 "params": { 00:38:12.645 "bdev_io_pool_size": 65535, 00:38:12.645 "bdev_io_cache_size": 256, 00:38:12.645 "bdev_auto_examine": true, 00:38:12.645 "iobuf_small_cache_size": 128, 00:38:12.645 "iobuf_large_cache_size": 16 00:38:12.645 } 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "method": "bdev_raid_set_options", 00:38:12.645 "params": { 00:38:12.645 "process_window_size_kb": 1024, 00:38:12.645 "process_max_bandwidth_mb_sec": 0 00:38:12.645 } 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "method": "bdev_iscsi_set_options", 00:38:12.645 "params": { 00:38:12.645 "timeout_sec": 30 00:38:12.645 } 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "method": "bdev_nvme_set_options", 00:38:12.645 "params": { 00:38:12.645 "action_on_timeout": "none", 00:38:12.645 "timeout_us": 0, 00:38:12.645 "timeout_admin_us": 0, 00:38:12.645 "keep_alive_timeout_ms": 10000, 00:38:12.645 "arbitration_burst": 0, 00:38:12.645 "low_priority_weight": 0, 00:38:12.645 "medium_priority_weight": 0, 00:38:12.645 "high_priority_weight": 0, 00:38:12.645 "nvme_adminq_poll_period_us": 10000, 00:38:12.645 "nvme_ioq_poll_period_us": 0, 00:38:12.645 "io_queue_requests": 512, 
00:38:12.645 "delay_cmd_submit": true, 00:38:12.645 "transport_retry_count": 4, 00:38:12.645 "bdev_retry_count": 3, 00:38:12.645 "transport_ack_timeout": 0, 00:38:12.645 "ctrlr_loss_timeout_sec": 0, 00:38:12.645 "reconnect_delay_sec": 0, 00:38:12.645 "fast_io_fail_timeout_sec": 0, 00:38:12.645 "disable_auto_failback": false, 00:38:12.645 "generate_uuids": false, 00:38:12.645 "transport_tos": 0, 00:38:12.645 "nvme_error_stat": false, 00:38:12.645 "rdma_srq_size": 0, 00:38:12.645 "io_path_stat": false, 00:38:12.645 "allow_accel_sequence": false, 00:38:12.645 "rdma_max_cq_size": 0, 00:38:12.645 "rdma_cm_event_timeout_ms": 0, 00:38:12.645 "dhchap_digests": [ 00:38:12.645 "sha256", 00:38:12.645 "sha384", 00:38:12.645 "sha512" 00:38:12.645 ], 00:38:12.645 "dhchap_dhgroups": [ 00:38:12.645 "null", 00:38:12.645 "ffdhe2048", 00:38:12.645 "ffdhe3072", 00:38:12.645 "ffdhe4096", 00:38:12.645 "ffdhe6144", 00:38:12.645 "ffdhe8192" 00:38:12.645 ] 00:38:12.645 } 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "method": "bdev_nvme_attach_controller", 00:38:12.645 "params": { 00:38:12.645 "name": "nvme0", 00:38:12.645 "trtype": "TCP", 00:38:12.645 "adrfam": "IPv4", 00:38:12.645 "traddr": "127.0.0.1", 00:38:12.645 "trsvcid": "4420", 00:38:12.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:12.645 "prchk_reftag": false, 00:38:12.645 "prchk_guard": false, 00:38:12.645 "ctrlr_loss_timeout_sec": 0, 00:38:12.645 "reconnect_delay_sec": 0, 00:38:12.645 "fast_io_fail_timeout_sec": 0, 00:38:12.645 "psk": "key0", 00:38:12.645 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:12.645 "hdgst": false, 00:38:12.645 "ddgst": false, 00:38:12.645 "multipath": "multipath" 00:38:12.645 } 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "method": "bdev_nvme_set_hotplug", 00:38:12.645 "params": { 00:38:12.645 "period_us": 100000, 00:38:12.645 "enable": false 00:38:12.645 } 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "method": "bdev_wait_for_examine" 00:38:12.645 } 00:38:12.645 ] 00:38:12.645 }, 00:38:12.645 { 00:38:12.645 "subsystem": "nbd", 00:38:12.645 "config": [] 00:38:12.646 } 00:38:12.646 ] 00:38:12.646 }' 00:38:12.646 07:52:30 keyring_file -- keyring/file.sh@115 -- # killprocess 3733721 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3733721 ']' 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3733721 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3733721 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3733721' 00:38:12.646 killing process with pid 3733721 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@971 -- # kill 3733721 00:38:12.646 Received shutdown signal, test time was about 1.000000 seconds 00:38:12.646 00:38:12.646 Latency(us) 00:38:12.646 [2024-11-20T06:52:30.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.646 [2024-11-20T06:52:30.856Z] =================================================================================================================== 00:38:12.646 [2024-11-20T06:52:30.856Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@976 -- # wait 3733721 00:38:12.646 07:52:30 keyring_file -- keyring/file.sh@118 -- # bperfpid=3735524 00:38:12.646 07:52:30 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3735524 /var/tmp/bperf.sock 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3735524 ']' 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:12.646 07:52:30 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:12.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:12.646 07:52:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:12.646 07:52:30 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:12.646 "subsystems": [ 00:38:12.646 { 00:38:12.646 "subsystem": "keyring", 00:38:12.646 "config": [ 00:38:12.646 { 00:38:12.646 "method": "keyring_file_add_key", 00:38:12.646 "params": { 00:38:12.646 "name": "key0", 00:38:12.646 "path": "/tmp/tmp.15CRGTFhcp" 00:38:12.646 } 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "method": "keyring_file_add_key", 00:38:12.646 "params": { 00:38:12.646 "name": "key1", 00:38:12.646 "path": "/tmp/tmp.f2OxwTVpZ4" 00:38:12.646 } 00:38:12.646 } 00:38:12.646 ] 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "subsystem": "iobuf", 00:38:12.646 "config": [ 00:38:12.646 { 00:38:12.646 "method": "iobuf_set_options", 00:38:12.646 "params": { 00:38:12.646 "small_pool_count": 8192, 00:38:12.646 "large_pool_count": 1024, 00:38:12.646 "small_bufsize": 8192, 00:38:12.646 "large_bufsize": 135168, 00:38:12.646 "enable_numa": false 00:38:12.646 } 00:38:12.646 } 00:38:12.646 ] 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "subsystem": "sock", 00:38:12.646 "config": [ 00:38:12.646 { 00:38:12.646 "method": "sock_set_default_impl", 00:38:12.646 "params": { 00:38:12.646 "impl_name": "posix" 00:38:12.646 } 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "method": "sock_impl_set_options", 00:38:12.646 "params": { 00:38:12.646 "impl_name": "ssl", 00:38:12.646 "recv_buf_size": 4096, 00:38:12.646 "send_buf_size": 4096, 00:38:12.646 "enable_recv_pipe": true, 00:38:12.646 "enable_quickack": false, 00:38:12.646 "enable_placement_id": 0, 00:38:12.646 "enable_zerocopy_send_server": true, 00:38:12.646 "enable_zerocopy_send_client": false, 00:38:12.646 "zerocopy_threshold": 0, 00:38:12.646 "tls_version": 0, 00:38:12.646 "enable_ktls": false 00:38:12.646 } 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "method": "sock_impl_set_options", 00:38:12.646 "params": { 00:38:12.646 "impl_name": "posix", 00:38:12.646 "recv_buf_size": 2097152, 00:38:12.646 "send_buf_size": 2097152, 00:38:12.646 "enable_recv_pipe": true, 00:38:12.646 "enable_quickack": false, 00:38:12.646 "enable_placement_id": 0, 00:38:12.646 "enable_zerocopy_send_server": true, 00:38:12.646 "enable_zerocopy_send_client": false, 00:38:12.646 "zerocopy_threshold": 0, 00:38:12.646 "tls_version": 0, 00:38:12.646 "enable_ktls": false 00:38:12.646 } 00:38:12.646 } 00:38:12.646 ] 
00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "subsystem": "vmd", 00:38:12.646 "config": [] 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "subsystem": "accel", 00:38:12.646 "config": [ 00:38:12.646 { 00:38:12.646 "method": "accel_set_options", 00:38:12.646 "params": { 00:38:12.646 "small_cache_size": 128, 00:38:12.646 "large_cache_size": 16, 00:38:12.646 "task_count": 2048, 00:38:12.646 "sequence_count": 2048, 00:38:12.646 "buf_count": 2048 00:38:12.646 } 00:38:12.646 } 00:38:12.646 ] 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "subsystem": "bdev", 00:38:12.646 "config": [ 00:38:12.646 { 00:38:12.646 "method": "bdev_set_options", 00:38:12.646 "params": { 00:38:12.646 "bdev_io_pool_size": 65535, 00:38:12.646 "bdev_io_cache_size": 256, 00:38:12.646 "bdev_auto_examine": true, 00:38:12.646 "iobuf_small_cache_size": 128, 00:38:12.646 "iobuf_large_cache_size": 16 00:38:12.646 } 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "method": "bdev_raid_set_options", 00:38:12.646 "params": { 00:38:12.646 "process_window_size_kb": 1024, 00:38:12.646 "process_max_bandwidth_mb_sec": 0 00:38:12.646 } 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "method": "bdev_iscsi_set_options", 00:38:12.646 "params": { 00:38:12.646 "timeout_sec": 30 00:38:12.646 } 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "method": "bdev_nvme_set_options", 00:38:12.646 "params": { 00:38:12.646 "action_on_timeout": "none", 00:38:12.646 "timeout_us": 0, 00:38:12.646 "timeout_admin_us": 0, 00:38:12.646 "keep_alive_timeout_ms": 10000, 00:38:12.646 "arbitration_burst": 0, 00:38:12.646 "low_priority_weight": 0, 00:38:12.646 "medium_priority_weight": 0, 00:38:12.646 "high_priority_weight": 0, 00:38:12.646 "nvme_adminq_poll_period_us": 10000, 00:38:12.646 "nvme_ioq_poll_period_us": 0, 00:38:12.646 "io_queue_requests": 512, 00:38:12.646 "delay_cmd_submit": true, 00:38:12.646 "transport_retry_count": 4, 00:38:12.646 "bdev_retry_count": 3, 00:38:12.646 "transport_ack_timeout": 0, 00:38:12.646 "ctrlr_loss_timeout_sec": 0, 00:38:12.646 "reconnect_delay_sec": 0, 00:38:12.646 "fast_io_fail_timeout_sec": 0, 00:38:12.646 "disable_auto_failback": false, 00:38:12.646 "generate_uuids": false, 00:38:12.646 "transport_tos": 0, 00:38:12.646 "nvme_error_stat": false, 00:38:12.646 "rdma_srq_size": 0, 00:38:12.646 "io_path_stat": false, 00:38:12.646 "allow_accel_sequence": false, 00:38:12.646 "rdma_max_cq_size": 0, 00:38:12.646 "rdma_cm_event_timeout_ms": 0, 00:38:12.646 "dhchap_digests": [ 00:38:12.646 "sha256", 00:38:12.646 "sha384", 00:38:12.646 "sha512" 00:38:12.646 ], 00:38:12.646 "dhchap_dhgroups": [ 00:38:12.646 "null", 00:38:12.646 "ffdhe2048", 00:38:12.646 "ffdhe3072", 00:38:12.646 "ffdhe4096", 00:38:12.646 "ffdhe6144", 00:38:12.646 "ffdhe8192" 00:38:12.646 ] 00:38:12.646 } 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "method": "bdev_nvme_attach_controller", 00:38:12.646 "params": { 00:38:12.646 "name": "nvme0", 00:38:12.646 "trtype": "TCP", 00:38:12.646 "adrfam": "IPv4", 00:38:12.646 "traddr": "127.0.0.1", 00:38:12.646 "trsvcid": "4420", 00:38:12.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:12.646 "prchk_reftag": false, 00:38:12.646 "prchk_guard": false, 00:38:12.646 "ctrlr_loss_timeout_sec": 0, 00:38:12.646 "reconnect_delay_sec": 0, 00:38:12.646 "fast_io_fail_timeout_sec": 0, 00:38:12.646 "psk": "key0", 00:38:12.646 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:12.646 "hdgst": false, 00:38:12.646 "ddgst": false, 00:38:12.646 "multipath": "multipath" 00:38:12.646 } 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "method": "bdev_nvme_set_hotplug", 00:38:12.646 
"params": { 00:38:12.646 "period_us": 100000, 00:38:12.646 "enable": false 00:38:12.646 } 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "method": "bdev_wait_for_examine" 00:38:12.646 } 00:38:12.646 ] 00:38:12.646 }, 00:38:12.646 { 00:38:12.646 "subsystem": "nbd", 00:38:12.646 "config": [] 00:38:12.646 } 00:38:12.646 ] 00:38:12.646 }' 00:38:12.906 [2024-11-20 07:52:30.886686] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 00:38:12.906 [2024-11-20 07:52:30.886751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735524 ] 00:38:12.906 [2024-11-20 07:52:30.971379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.906 [2024-11-20 07:52:31.000462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.165 [2024-11-20 07:52:31.144732] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:13.735 07:52:31 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:13.735 07:52:31 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:13.735 07:52:31 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:13.735 07:52:31 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:13.735 07:52:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:13.735 07:52:31 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:13.735 07:52:31 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:13.735 07:52:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:13.735 07:52:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:13.735 07:52:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:13.735 07:52:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:13.735 07:52:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:13.995 07:52:32 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:13.995 07:52:32 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:13.996 07:52:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:13.996 07:52:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:13.996 07:52:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:13.996 07:52:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:13.996 07:52:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:14.256 07:52:32 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:14.256 07:52:32 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:14.256 07:52:32 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:14.256 07:52:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:14.256 07:52:32 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:14.256 07:52:32 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:14.256 07:52:32 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.15CRGTFhcp /tmp/tmp.f2OxwTVpZ4 00:38:14.256 07:52:32 keyring_file -- keyring/file.sh@20 -- # killprocess 3735524 00:38:14.256 07:52:32 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3735524 ']' 00:38:14.256 07:52:32 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3735524 00:38:14.256 07:52:32 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:14.256 07:52:32 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:14.256 07:52:32 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3735524 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3735524' 00:38:14.515 killing process with pid 3735524 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@971 -- # kill 3735524 00:38:14.515 Received shutdown signal, test time was about 1.000000 seconds 00:38:14.515 00:38:14.515 Latency(us) 00:38:14.515 [2024-11-20T06:52:32.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.515 [2024-11-20T06:52:32.725Z] =================================================================================================================== 00:38:14.515 [2024-11-20T06:52:32.725Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@976 -- # wait 3735524 00:38:14.515 07:52:32 keyring_file -- keyring/file.sh@21 -- # killprocess 3733699 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3733699 ']' 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3733699 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3733699 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3733699' 00:38:14.515 killing process with pid 3733699 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@971 -- # kill 3733699 00:38:14.515 07:52:32 keyring_file -- common/autotest_common.sh@976 -- # wait 3733699 00:38:14.775 00:38:14.775 real 0m12.169s 00:38:14.775 user 0m29.324s 00:38:14.775 sys 0m2.730s 00:38:14.775 07:52:32 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:14.775 07:52:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:14.775 ************************************ 00:38:14.775 END TEST keyring_file 00:38:14.775 ************************************ 00:38:14.775 07:52:32 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:38:14.775 07:52:32 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:14.775 07:52:32 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:38:14.775 07:52:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 
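[The keyring_linux run starting below is wrapped by scripts/keyctl-session-wrapper, which is why the log soon reports "Joined session keyring: 420778935". A sketch of the effect using keyctl(1) directly, on the assumption that the wrapper simply joins a fresh anonymous session keyring so every :spdk-test:* key the test adds is scoped to the wrapped command and dies with it:]

    # sketch only -- the real wrapper lives in spdk/scripts/keyctl-session-wrapper
    keyctl session - bash -c '
        keyctl add user :spdk-test:key0 "$PSK" @s   # @s = this new session keyring
        keyctl show @s                              # keys are visible here only
    '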
00:38:14.775 07:52:32 -- common/autotest_common.sh@10 -- # set +x 00:38:14.775 ************************************ 00:38:14.775 START TEST keyring_linux 00:38:14.775 ************************************ 00:38:14.775 07:52:32 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:14.775 Joined session keyring: 420778935 00:38:15.036 * Looking for test storage... 00:38:15.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:15.036 07:52:33 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:15.036 07:52:33 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:38:15.036 07:52:33 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:15.036 07:52:33 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:15.036 07:52:33 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:15.036 07:52:33 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:15.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.036 --rc genhtml_branch_coverage=1 00:38:15.036 --rc genhtml_function_coverage=1 00:38:15.036 --rc genhtml_legend=1 00:38:15.036 --rc geninfo_all_blocks=1 00:38:15.036 --rc geninfo_unexecuted_blocks=1 00:38:15.036 00:38:15.036 ' 00:38:15.036 07:52:33 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:15.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.036 --rc genhtml_branch_coverage=1 00:38:15.036 --rc genhtml_function_coverage=1 00:38:15.036 --rc genhtml_legend=1 00:38:15.036 --rc geninfo_all_blocks=1 00:38:15.036 --rc geninfo_unexecuted_blocks=1 00:38:15.036 00:38:15.036 ' 00:38:15.036 07:52:33 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:15.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.036 --rc genhtml_branch_coverage=1 00:38:15.036 --rc genhtml_function_coverage=1 00:38:15.036 --rc genhtml_legend=1 00:38:15.036 --rc geninfo_all_blocks=1 00:38:15.036 --rc geninfo_unexecuted_blocks=1 00:38:15.036 00:38:15.036 ' 00:38:15.036 07:52:33 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:15.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.036 --rc genhtml_branch_coverage=1 00:38:15.036 --rc genhtml_function_coverage=1 00:38:15.036 --rc genhtml_legend=1 00:38:15.036 --rc geninfo_all_blocks=1 00:38:15.036 --rc geninfo_unexecuted_blocks=1 00:38:15.036 00:38:15.036 ' 00:38:15.036 07:52:33 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:15.036 07:52:33 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:15.036 07:52:33 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:15.036 07:52:33 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.036 07:52:33 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.036 07:52:33 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.036 07:52:33 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:15.036 07:52:33 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
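[The prep_key calls that follow build the TP 8006 interchange string the same way the keyring_file test did above: base64 over the raw key bytes with a CRC-32 appended, wrapped in an NVMeTLSkey-1:<digest>: prefix, then chmod 0600 on the resulting file. A sketch of the formatting step, in the same "python -" style as nvmf/common.sh@733; the CRC byte order is an assumption here, not something the log states:]

    key=00112233445566778899aabbccddeeff
    python3 - "$key" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                    # the test keys are raw ASCII, not hex-decoded
    crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: CRC-32 appended little-endian
    print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
    EOF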
00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:15.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:15.036 07:52:33 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:15.036 07:52:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:15.036 07:52:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:15.036 07:52:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:15.036 07:52:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:15.036 07:52:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:15.036 07:52:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:15.036 07:52:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:15.036 07:52:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:15.037 07:52:33 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:15.037 07:52:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:15.037 07:52:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:15.037 07:52:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:15.037 07:52:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:15.037 07:52:33 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:15.037 07:52:33 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:15.037 07:52:33 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:15.037 07:52:33 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:15.037 07:52:33 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:15.037 07:52:33 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:15.037 07:52:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:15.037 07:52:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:15.037 /tmp/:spdk-test:key0 00:38:15.037 07:52:33 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:15.037 07:52:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:15.037 07:52:33 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:15.037 07:52:33 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:15.037 07:52:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:15.037 07:52:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:15.037 
07:52:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:15.037 07:52:33 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:15.037 07:52:33 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:15.037 07:52:33 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:15.037 07:52:33 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:15.037 07:52:33 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:15.037 07:52:33 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:15.037 07:52:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:15.297 07:52:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:15.297 /tmp/:spdk-test:key1 00:38:15.297 07:52:33 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3735987 00:38:15.297 07:52:33 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:15.297 07:52:33 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3735987 00:38:15.297 07:52:33 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3735987 ']' 00:38:15.297 07:52:33 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:15.297 07:52:33 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:15.297 07:52:33 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:15.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:15.297 07:52:33 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:15.297 07:52:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:15.297 [2024-11-20 07:52:33.310436] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:38:15.297 [2024-11-20 07:52:33.310492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735987 ] 00:38:15.297 [2024-11-20 07:52:33.395103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:15.297 [2024-11-20 07:52:33.425438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.234 07:52:34 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:16.234 07:52:34 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:16.234 07:52:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:16.234 07:52:34 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.234 07:52:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:16.234 [2024-11-20 07:52:34.097727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:16.234 null0 00:38:16.234 [2024-11-20 07:52:34.129787] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:16.234 [2024-11-20 07:52:34.130141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:16.234 07:52:34 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.234 07:52:34 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:16.234 901973975 00:38:16.234 07:52:34 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:16.234 997212701 00:38:16.234 07:52:34 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3736295 00:38:16.234 07:52:34 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3736295 /var/tmp/bperf.sock 00:38:16.234 07:52:34 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:16.234 07:52:34 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3736295 ']' 00:38:16.234 07:52:34 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:16.234 07:52:34 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:16.234 07:52:34 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:16.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:16.235 07:52:34 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:16.235 07:52:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:16.235 [2024-11-20 07:52:34.209734] Starting SPDK v25.01-pre git sha1 12962b97e / DPDK 24.03.0 initialization... 
00:38:16.235 [2024-11-20 07:52:34.209792] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736295 ] 00:38:16.235 [2024-11-20 07:52:34.294806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.235 [2024-11-20 07:52:34.324905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:17.172 07:52:35 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:17.172 07:52:35 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:17.172 07:52:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:17.172 07:52:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:17.172 07:52:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:17.172 07:52:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:17.433 07:52:35 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:17.433 07:52:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:17.433 [2024-11-20 07:52:35.546216] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:17.433 nvme0n1 00:38:17.693 07:52:35 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:17.693 07:52:35 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:17.693 07:52:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:17.693 07:52:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:17.693 07:52:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:17.693 07:52:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:17.693 07:52:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:17.693 07:52:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:17.693 07:52:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:17.693 07:52:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:17.693 07:52:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:17.693 07:52:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:17.693 07:52:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:17.953 07:52:35 keyring_linux -- keyring/linux.sh@25 -- # sn=901973975 00:38:17.953 07:52:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:17.953 07:52:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:17.953 07:52:35 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 901973975 == \9\0\1\9\7\3\9\7\5 ]] 00:38:17.953 07:52:35 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 901973975 00:38:17.953 07:52:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:17.953 07:52:35 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:17.953 Running I/O for 1 seconds... 00:38:18.893 24603.00 IOPS, 96.11 MiB/s 00:38:18.893 Latency(us) 00:38:18.893 [2024-11-20T06:52:37.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.893 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:18.893 nvme0n1 : 1.01 24602.39 96.10 0.00 0.00 5186.98 2293.76 6662.83 00:38:18.893 [2024-11-20T06:52:37.103Z] =================================================================================================================== 00:38:18.893 [2024-11-20T06:52:37.103Z] Total : 24602.39 96.10 0.00 0.00 5186.98 2293.76 6662.83 00:38:18.893 { 00:38:18.893 "results": [ 00:38:18.893 { 00:38:18.893 "job": "nvme0n1", 00:38:18.893 "core_mask": "0x2", 00:38:18.893 "workload": "randread", 00:38:18.893 "status": "finished", 00:38:18.893 "queue_depth": 128, 00:38:18.893 "io_size": 4096, 00:38:18.893 "runtime": 1.005268, 00:38:18.893 "iops": 24602.39458532451, 00:38:18.893 "mibps": 96.10310384892387, 00:38:18.893 "io_failed": 0, 00:38:18.893 "io_timeout": 0, 00:38:18.893 "avg_latency_us": 5186.97873092889, 00:38:18.893 "min_latency_us": 2293.76, 00:38:18.893 "max_latency_us": 6662.826666666667 00:38:18.893 } 00:38:18.893 ], 00:38:18.893 "core_count": 1 00:38:18.893 } 00:38:19.155 07:52:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:19.155 07:52:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:19.155 07:52:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:19.155 07:52:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:19.155 07:52:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:19.155 07:52:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:19.155 07:52:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:19.155 07:52:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.416 07:52:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:19.416 07:52:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:19.416 07:52:37 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:19.416 07:52:37 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:19.416 07:52:37 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:38:19.416 07:52:37 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
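[The expansion traced here is the harness's NOT helper: it inverts the command's exit status so that the expected attach failure with :spdk-test:key1 counts as a pass, while still treating deaths by signal as real errors. A simplified sketch of the logic visible in the autotest_common.sh@650-677 lines; the real helper also routes the argument through valid_exec_arg and carries an expected-output check, both elided here:]

    NOT() {
        local es=0
        "$@" || es=$?                    # run the command, remember its status
        (( es > 128 )) && return "$es"   # killed by a signal: propagate the failure
        (( es != 0 ))                    # succeed only if the command actually failed
    }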
00:38:19.416 07:52:37 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:19.416 07:52:37 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:19.416 07:52:37 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:19.416 07:52:37 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:19.416 07:52:37 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:19.416 07:52:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:19.676 [2024-11-20 07:52:37.649041] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:19.676 [2024-11-20 07:52:37.649332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1013a60 (107): Transport endpoint is not connected 00:38:19.676 [2024-11-20 07:52:37.650323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1013a60 (9): Bad file descriptor 00:38:19.676 [2024-11-20 07:52:37.651324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:19.676 [2024-11-20 07:52:37.651332] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:19.676 [2024-11-20 07:52:37.651341] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:19.676 [2024-11-20 07:52:37.651351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:19.676 request: 00:38:19.676 { 00:38:19.676 "name": "nvme0", 00:38:19.676 "trtype": "tcp", 00:38:19.676 "traddr": "127.0.0.1", 00:38:19.676 "adrfam": "ipv4", 00:38:19.676 "trsvcid": "4420", 00:38:19.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:19.676 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:19.676 "prchk_reftag": false, 00:38:19.676 "prchk_guard": false, 00:38:19.676 "hdgst": false, 00:38:19.676 "ddgst": false, 00:38:19.676 "psk": ":spdk-test:key1", 00:38:19.676 "allow_unrecognized_csi": false, 00:38:19.676 "method": "bdev_nvme_attach_controller", 00:38:19.676 "req_id": 1 00:38:19.676 } 00:38:19.676 Got JSON-RPC error response 00:38:19.676 response: 00:38:19.676 { 00:38:19.676 "code": -5, 00:38:19.676 "message": "Input/output error" 00:38:19.676 } 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@33 -- # sn=901973975 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 901973975 00:38:19.676 1 links removed 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@33 -- # sn=997212701 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 997212701 00:38:19.676 1 links removed 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3736295 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3736295 ']' 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3736295 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3736295 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3736295' 00:38:19.676 killing process with pid 3736295 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@971 -- # kill 3736295 00:38:19.676 Received shutdown signal, test time was about 1.000000 seconds 00:38:19.676 00:38:19.676 
Latency(us) 00:38:19.676 [2024-11-20T06:52:37.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:19.676 [2024-11-20T06:52:37.886Z] =================================================================================================================== 00:38:19.676 [2024-11-20T06:52:37.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@976 -- # wait 3736295 00:38:19.676 07:52:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3735987 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3735987 ']' 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3735987 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:19.676 07:52:37 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3735987 00:38:19.936 07:52:37 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:19.936 07:52:37 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:19.936 07:52:37 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3735987' 00:38:19.936 killing process with pid 3735987 00:38:19.936 07:52:37 keyring_linux -- common/autotest_common.sh@971 -- # kill 3735987 00:38:19.936 07:52:37 keyring_linux -- common/autotest_common.sh@976 -- # wait 3735987 00:38:19.936 00:38:19.936 real 0m5.184s 00:38:19.936 user 0m9.637s 00:38:19.936 sys 0m1.446s 00:38:19.936 07:52:38 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:19.936 07:52:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:19.936 ************************************ 00:38:19.936 END TEST keyring_linux 00:38:19.936 ************************************ 00:38:20.196 07:52:38 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:38:20.196 07:52:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:20.196 07:52:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:20.196 07:52:38 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:38:20.196 07:52:38 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:38:20.196 07:52:38 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:38:20.196 07:52:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:20.196 07:52:38 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:20.196 07:52:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:20.196 07:52:38 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:38:20.196 07:52:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:20.196 07:52:38 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:38:20.196 07:52:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:20.196 07:52:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:20.196 07:52:38 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:38:20.196 07:52:38 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:38:20.196 07:52:38 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:38:20.196 07:52:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:20.196 07:52:38 -- common/autotest_common.sh@10 -- # set +x 00:38:20.196 07:52:38 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:38:20.196 07:52:38 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:38:20.196 07:52:38 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:38:20.196 07:52:38 -- common/autotest_common.sh@10 -- # set +x 00:38:28.331 INFO: APP EXITING 
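The attach with :spdk-test:key1 above is expected to fail — its -5 Input/output error is the passing outcome, since the harness wraps the call in a NOT helper that inverts the exit status — after which the suite tears everything down. A simplified reconstruction of that teardown from the xtrace follows; the real killprocess in test/common/autotest_common.sh carries additional guards (a uname check, exit-status bookkeeping) elided here.

# Unlink both test keys from the session keyring by serial number.
for name in :spdk-test:key0 :spdk-test:key1; do
    sn=$(keyctl search @s user "$name") || continue
    keyctl unlink "$sn"    # the log reports "1 links removed" per key
done

# Minimal sketch of the killprocess idiom traced above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                        # is it still alive?
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}
killprocess 3736295    # the bdevperf instance (reactor_1)
killprocess 3735987    # its companion daemon from earlier in the test (reactor_0)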
00:38:28.331 INFO: killing all VMs 00:38:28.331 INFO: killing vhost app 00:38:28.331 INFO: EXIT DONE 00:38:30.875 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:30.875 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:30.875 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:30.875 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:31.166 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:31.166 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:31.166 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:38:31.166 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:31.166 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:31.166 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:31.166 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:38:31.166 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:31.166 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:31.166 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:31.166 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:31.166 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:31.459 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:35.689 Cleaning 00:38:35.689 Removing: /var/run/dpdk/spdk0/config 00:38:35.689 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:35.689 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:35.689 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:35.689 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:35.689 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:35.689 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:35.689 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:35.689 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:35.689 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:35.689 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:35.689 Removing: /var/run/dpdk/spdk1/config 00:38:35.689 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:35.689 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:35.689 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:35.689 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:35.689 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:35.689 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:35.689 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:35.689 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:35.689 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:35.689 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:35.689 Removing: /var/run/dpdk/spdk2/config 00:38:35.689 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:35.689 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:35.689 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:35.689 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:35.689 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:35.689 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:35.689 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:35.689 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:35.689 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:35.689 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:35.689 Removing: /var/run/dpdk/spdk3/config 00:38:35.689 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:35.689 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:35.689 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:35.689 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:35.689 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:35.689 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:35.689 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:35.689 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:35.689 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:35.689 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:35.689 Removing: /var/run/dpdk/spdk4/config 00:38:35.689 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:35.689 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:35.689 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:35.689 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:35.689 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:35.689 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:35.689 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:35.689 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:35.689 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:35.689 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:35.689 Removing: /dev/shm/bdev_svc_trace.1 00:38:35.689 Removing: /dev/shm/nvmf_trace.0 00:38:35.689 Removing: /dev/shm/spdk_tgt_trace.pid3155736 00:38:35.689 Removing: /var/run/dpdk/spdk0 00:38:35.689 Removing: /var/run/dpdk/spdk1 00:38:35.689 Removing: /var/run/dpdk/spdk2 00:38:35.690 Removing: /var/run/dpdk/spdk3 00:38:35.690 Removing: /var/run/dpdk/spdk4 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3154160 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3155736 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3156604 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3157763 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3158328 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3159515 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3159694 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3159989 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3161125 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3161845 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3162166 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3162521 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3162878 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3163212 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3163561 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3163909 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3164216 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3165367 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3168786 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3169173 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3169547 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3169705 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3170084 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3170414 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3170792 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3171009 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3171167 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3171501 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3171564 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3171876 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3172326 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3172676 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3173077 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3177638 00:38:35.690 Removing: 
/var/run/dpdk/spdk_pid3183056 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3195113 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3195882 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3201019 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3201484 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3206873 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3214331 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3217544 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3230150 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3241271 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3243294 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3244326 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3266046 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3271105 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3327480 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3333937 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3340862 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3348880 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3348951 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3349957 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3350985 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3352018 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3352638 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3352779 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3352975 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3353140 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3353144 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3354148 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3355150 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3356158 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3356829 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3356836 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3357168 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3358611 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3359848 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3370297 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3404651 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3410239 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3412234 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3414355 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3414605 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3414949 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3415290 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3416009 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3418347 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3419444 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3420152 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3422541 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3423283 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3424227 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3429244 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3435971 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3435973 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3435974 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3440756 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3451170 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3456559 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3463940 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3465443 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3467166 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3468970 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3474457 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3479919 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3484994 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3494155 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3494157 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3499237 00:38:35.690 Removing: 
/var/run/dpdk/spdk_pid3499563 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3499843 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3500242 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3500248 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3505972 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3506580 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3512599 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3515800 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3522358 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3528955 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3539237 00:38:35.690 Removing: /var/run/dpdk/spdk_pid3547789 00:38:35.950 Removing: /var/run/dpdk/spdk_pid3547843 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3571473 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3572231 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3573034 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3573845 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3574881 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3575593 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3576271 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3576959 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3582146 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3582391 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3589760 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3589952 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3596536 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3601704 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3613880 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3614677 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3619765 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3620116 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3625182 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3631959 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3635022 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3647296 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3658001 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3660002 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3661129 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3681287 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3686043 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3689249 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3697039 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3697045 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3703020 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3705430 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3707691 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3709032 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3711503 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3713028 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3723489 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3724152 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3724823 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3727778 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3728298 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3728802 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3733699 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3733721 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3735524 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3735987 00:38:35.951 Removing: /var/run/dpdk/spdk_pid3736295 00:38:35.951 Clean 00:38:36.212 07:52:54 -- common/autotest_common.sh@1451 -- # return 0 00:38:36.212 07:52:54 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:38:36.212 07:52:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:36.212 07:52:54 -- common/autotest_common.sh@10 -- # set +x 00:38:36.212 07:52:54 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:38:36.212 
07:52:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:36.212 07:52:54 -- common/autotest_common.sh@10 -- # set +x 00:38:36.212 07:52:54 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:36.212 07:52:54 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:36.212 07:52:54 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:36.212 07:52:54 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:38:36.212 07:52:54 -- spdk/autotest.sh@394 -- # hostname 00:38:36.212 07:52:54 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-13 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:36.473 geninfo: WARNING: invalid characters removed from testname! 00:39:03.057 07:53:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:05.604 07:53:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:06.987 07:53:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:08.897 07:53:26 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:10.280 07:53:28 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:12.191 07:53:29 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:13.575 07:53:31 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:13.575 07:53:31 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:13.575 07:53:31 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:13.575 07:53:31 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:13.575 07:53:31 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:13.575 07:53:31 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:13.575 + [[ -n 3069259 ]] 00:39:13.575 + sudo kill 3069259 00:39:13.586 [Pipeline] } 00:39:13.604 [Pipeline] // stage 00:39:13.610 [Pipeline] } 00:39:13.626 [Pipeline] // timeout 00:39:13.632 [Pipeline] } 00:39:13.648 [Pipeline] // catchError 00:39:13.653 [Pipeline] } 00:39:13.686 [Pipeline] // wrap 00:39:13.692 [Pipeline] } 00:39:13.706 [Pipeline] // catchError 00:39:13.716 [Pipeline] stage 00:39:13.719 [Pipeline] { (Epilogue) 00:39:13.735 [Pipeline] catchError 00:39:13.736 [Pipeline] { 00:39:13.748 [Pipeline] echo 00:39:13.750 Cleanup processes 00:39:13.754 [Pipeline] sh 00:39:14.042 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:14.042 3749348 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:14.056 [Pipeline] sh 00:39:14.343 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:14.343 ++ grep -v 'sudo pgrep' 00:39:14.343 ++ awk '{print $1}' 00:39:14.343 + sudo kill -9 00:39:14.343 + true 00:39:14.357 [Pipeline] sh 00:39:14.650 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:26.896 [Pipeline] sh 00:39:27.186 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:27.186 Artifacts sizes are good 00:39:27.200 [Pipeline] archiveArtifacts 00:39:27.207 Archiving artifacts 00:39:27.363 [Pipeline] sh 00:39:27.657 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:27.671 [Pipeline] cleanWs 00:39:27.689 [WS-CLEANUP] Deleting project workspace... 00:39:27.689 [WS-CLEANUP] Deferred wipeout is used... 00:39:27.700 [WS-CLEANUP] done 00:39:27.701 [Pipeline] } 00:39:27.712 [Pipeline] // catchError 00:39:27.719 [Pipeline] sh 00:39:28.040 + logger -p user.info -t JENKINS-CI 00:39:28.051 [Pipeline] } 00:39:28.061 [Pipeline] // stage 00:39:28.064 [Pipeline] } 00:39:28.076 [Pipeline] // node 00:39:28.080 [Pipeline] End of Pipeline 00:39:28.115 Finished: SUCCESS
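For reference, the coverage post-processing that closes this run reduces to the skeleton below. The hostname tag, tracefile names, and removal patterns are the ones visible above; the source directory is abbreviated, and the flame-graph output redirection is an assumption (the log does not show where the SVG lands).

LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1 \
      --rc geninfo_unexecuted_blocks=1 -q"

# Capture post-test counters and merge them with the pre-test baseline.
$LCOV -c --no-external -d ./spdk -t spdk-cyp-13 -o cov_test.info
$LCOV -a cov_base.info -a cov_test.info -o cov_total.info

# Strip bundled dependencies, system headers, and helper apps from the total.
$LCOV -r cov_total.info '*/dpdk/*' -o cov_total.info
$LCOV -r cov_total.info '/usr/*' --ignore-errors unused,unused -o cov_total.info
for p in '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    $LCOV -r cov_total.info "$p" -o cov_total.info
done

# Render the build-timing flame graph, as invoked at the end of autorun.
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
    --nametype Step: --countname seconds timing.txt > timing.svg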